00:00:00.001 Started by upstream project "autotest-nightly" build number 4284 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3647 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.068 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.068 The recommended git tool is: git 00:00:00.069 using credential 00000000-0000-0000-0000-000000000002 00:00:00.073 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.097 Fetching changes from the remote Git repository 00:00:00.100 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.128 Using shallow fetch with depth 1 00:00:00.128 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.128 > git --version # timeout=10 00:00:00.172 > git --version # 'git version 2.39.2' 00:00:00.172 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.219 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.219 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:09.969 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:09.979 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:09.990 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:09.990 > git config core.sparsecheckout # timeout=10 00:00:10.001 > git read-tree -mu HEAD # timeout=10 00:00:10.015 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:10.038 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:10.039 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:10.140 [Pipeline] Start of Pipeline 00:00:10.154 [Pipeline] library 00:00:10.156 Loading library shm_lib@master 00:00:10.157 Library shm_lib@master is cached. Copying from home. 00:00:10.176 [Pipeline] node 00:00:10.187 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:10.189 [Pipeline] { 00:00:10.202 [Pipeline] catchError 00:00:10.204 [Pipeline] { 00:00:10.221 [Pipeline] wrap 00:00:10.232 [Pipeline] { 00:00:10.242 [Pipeline] stage 00:00:10.245 [Pipeline] { (Prologue) 00:00:10.466 [Pipeline] sh 00:00:10.749 + logger -p user.info -t JENKINS-CI 00:00:10.769 [Pipeline] echo 00:00:10.771 Node: GP11 00:00:10.779 [Pipeline] sh 00:00:11.079 [Pipeline] setCustomBuildProperty 00:00:11.090 [Pipeline] echo 00:00:11.092 Cleanup processes 00:00:11.098 [Pipeline] sh 00:00:11.382 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:11.382 2769397 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:11.393 [Pipeline] sh 00:00:11.676 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:11.676 ++ grep -v 'sudo pgrep' 00:00:11.676 ++ awk '{print $1}' 00:00:11.676 + sudo kill -9 00:00:11.676 + true 00:00:11.690 [Pipeline] cleanWs 00:00:11.699 [WS-CLEANUP] Deleting project workspace... 00:00:11.699 [WS-CLEANUP] Deferred wipeout is used... 
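For reference, the "Cleanup processes" step above reduces to a small shell idiom: list anything still running out of this job's workspace, drop the pgrep invocation itself, and force-kill whatever is left, tolerating an empty result (hence the "+ true" in the trace). A minimal sketch under those assumptions; the function name and the use of xargs are illustrative, not what the pipeline literally runs:

  # Sketch: reap leftover processes from a previous run of this workspace.
  kill_stale_spdk() {
      local ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
      # pgrep -af prints "PID full-command-line"; filter out the pgrep line,
      # keep only the PIDs, and SIGKILL them. xargs -r skips the kill when
      # nothing matched, mirroring the '|| true' tolerance seen above.
      sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}' \
          | xargs -r sudo kill -9 || true
  }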
00:00:11.706 [WS-CLEANUP] done 00:00:11.711 [Pipeline] setCustomBuildProperty 00:00:11.728 [Pipeline] sh 00:00:12.015 + sudo git config --global --replace-all safe.directory '*' 00:00:12.111 [Pipeline] httpRequest 00:00:12.728 [Pipeline] echo 00:00:12.730 Sorcerer 10.211.164.20 is alive 00:00:12.742 [Pipeline] retry 00:00:12.744 [Pipeline] { 00:00:12.759 [Pipeline] httpRequest 00:00:12.764 HttpMethod: GET 00:00:12.764 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.765 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.769 Response Code: HTTP/1.1 200 OK 00:00:12.769 Success: Status code 200 is in the accepted range: 200,404 00:00:12.769 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.762 [Pipeline] } 00:00:13.777 [Pipeline] // retry 00:00:13.783 [Pipeline] sh 00:00:14.110 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.125 [Pipeline] httpRequest 00:00:14.492 [Pipeline] echo 00:00:14.494 Sorcerer 10.211.164.20 is alive 00:00:14.501 [Pipeline] retry 00:00:14.503 [Pipeline] { 00:00:14.515 [Pipeline] httpRequest 00:00:14.518 HttpMethod: GET 00:00:14.519 URL: http://10.211.164.20/packages/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:00:14.519 Sending request to url: http://10.211.164.20/packages/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:00:14.535 Response Code: HTTP/1.1 200 OK 00:00:14.536 Success: Status code 200 is in the accepted range: 200,404 00:00:14.536 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:01:23.087 [Pipeline] } 00:01:23.105 [Pipeline] // retry 00:01:23.113 [Pipeline] sh 00:01:23.428 + tar --no-same-owner -xf spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz 00:01:26.001 [Pipeline] sh 00:01:26.287 + git -C spdk log --oneline -n5 00:01:26.288 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb 00:01:26.288 8d982eda9 dpdk: add adjustments for recent rte_power changes 00:01:26.288 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:01:26.288 73f18e890 lib/reduce: fix the magic number of empty mapping detection. 
00:01:26.288 029355612 bdev_ut: add manual examine bdev unit test case 00:01:26.300 [Pipeline] } 00:01:26.316 [Pipeline] // stage 00:01:26.327 [Pipeline] stage 00:01:26.330 [Pipeline] { (Prepare) 00:01:26.347 [Pipeline] writeFile 00:01:26.363 [Pipeline] sh 00:01:26.648 + logger -p user.info -t JENKINS-CI 00:01:26.663 [Pipeline] sh 00:01:26.949 + logger -p user.info -t JENKINS-CI 00:01:26.963 [Pipeline] sh 00:01:27.249 + cat autorun-spdk.conf 00:01:27.250 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.250 SPDK_TEST_NVMF=1 00:01:27.250 SPDK_TEST_NVME_CLI=1 00:01:27.250 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:27.250 SPDK_TEST_NVMF_NICS=e810 00:01:27.250 SPDK_RUN_ASAN=1 00:01:27.250 SPDK_RUN_UBSAN=1 00:01:27.250 NET_TYPE=phy 00:01:27.258 RUN_NIGHTLY=1 00:01:27.262 [Pipeline] readFile 00:01:27.287 [Pipeline] withEnv 00:01:27.289 [Pipeline] { 00:01:27.301 [Pipeline] sh 00:01:27.588 + set -ex 00:01:27.589 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:27.589 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:27.589 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.589 ++ SPDK_TEST_NVMF=1 00:01:27.589 ++ SPDK_TEST_NVME_CLI=1 00:01:27.589 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:27.589 ++ SPDK_TEST_NVMF_NICS=e810 00:01:27.589 ++ SPDK_RUN_ASAN=1 00:01:27.589 ++ SPDK_RUN_UBSAN=1 00:01:27.589 ++ NET_TYPE=phy 00:01:27.589 ++ RUN_NIGHTLY=1 00:01:27.589 + case $SPDK_TEST_NVMF_NICS in 00:01:27.589 + DRIVERS=ice 00:01:27.589 + [[ tcp == \r\d\m\a ]] 00:01:27.589 + [[ -n ice ]] 00:01:27.589 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:27.589 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:27.589 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:27.589 rmmod: ERROR: Module irdma is not currently loaded 00:01:27.589 rmmod: ERROR: Module i40iw is not currently loaded 00:01:27.589 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:27.589 + true 00:01:27.589 + for D in $DRIVERS 00:01:27.589 + sudo modprobe ice 00:01:27.589 + exit 0 00:01:27.598 [Pipeline] } 00:01:27.612 [Pipeline] // withEnv 00:01:27.617 [Pipeline] } 00:01:27.630 [Pipeline] // stage 00:01:27.639 [Pipeline] catchError 00:01:27.641 [Pipeline] { 00:01:27.654 [Pipeline] timeout 00:01:27.654 Timeout set to expire in 1 hr 0 min 00:01:27.656 [Pipeline] { 00:01:27.671 [Pipeline] stage 00:01:27.674 [Pipeline] { (Tests) 00:01:27.689 [Pipeline] sh 00:01:27.973 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.973 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.973 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.973 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:27.973 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:27.974 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:27.974 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:27.974 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:27.974 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:27.974 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:27.974 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:27.974 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.974 + source /etc/os-release 00:01:27.974 ++ NAME='Fedora Linux' 00:01:27.974 ++ VERSION='39 (Cloud Edition)' 00:01:27.974 ++ ID=fedora 00:01:27.974 ++ VERSION_ID=39 00:01:27.974 ++ VERSION_CODENAME= 00:01:27.974 ++ PLATFORM_ID=platform:f39 00:01:27.974 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:27.974 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:27.974 ++ LOGO=fedora-logo-icon 00:01:27.974 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:27.974 ++ HOME_URL=https://fedoraproject.org/ 00:01:27.974 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:27.974 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:27.974 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:27.974 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:27.974 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:27.974 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:27.974 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:27.974 ++ SUPPORT_END=2024-11-12 00:01:27.974 ++ VARIANT='Cloud Edition' 00:01:27.974 ++ VARIANT_ID=cloud 00:01:27.974 + uname -a 00:01:27.974 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:27.974 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:28.915 Hugepages 00:01:28.915 node hugesize free / total 00:01:28.915 node0 1048576kB 0 / 0 00:01:28.915 node0 2048kB 0 / 0 00:01:28.915 node1 1048576kB 0 / 0 00:01:28.915 node1 2048kB 0 / 0 00:01:28.915 00:01:28.915 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:28.915 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:28.915 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:28.915 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:28.915 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:28.915 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:28.915 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:28.915 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:28.915 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:28.915 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:28.915 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:28.915 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:28.915 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:28.915 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:28.915 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:28.915 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:28.915 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:28.915 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:28.915 + rm -f /tmp/spdk-ld-path 00:01:28.915 + source autorun-spdk.conf 00:01:28.915 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.915 ++ SPDK_TEST_NVMF=1 00:01:28.915 ++ SPDK_TEST_NVME_CLI=1 00:01:28.915 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.915 ++ SPDK_TEST_NVMF_NICS=e810 00:01:28.915 ++ SPDK_RUN_ASAN=1 00:01:28.915 ++ SPDK_RUN_UBSAN=1 00:01:28.915 ++ NET_TYPE=phy 00:01:28.915 ++ RUN_NIGHTLY=1 00:01:28.915 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:28.915 + [[ -n '' ]] 00:01:28.915 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:28.915 + for M in /var/spdk/build-*-manifest.txt 00:01:28.915 + [[ -f 
/var/spdk/build-kernel-manifest.txt ]] 00:01:28.915 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.915 + for M in /var/spdk/build-*-manifest.txt 00:01:28.915 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:28.915 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.915 + for M in /var/spdk/build-*-manifest.txt 00:01:28.915 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:28.915 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.915 ++ uname 00:01:28.915 + [[ Linux == \L\i\n\u\x ]] 00:01:28.915 + sudo dmesg -T 00:01:29.174 + sudo dmesg --clear 00:01:29.174 + dmesg_pid=2770191 00:01:29.174 + [[ Fedora Linux == FreeBSD ]] 00:01:29.174 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:29.174 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:29.174 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:29.174 + sudo dmesg -Tw 00:01:29.174 + [[ -x /usr/src/fio-static/fio ]] 00:01:29.174 + export FIO_BIN=/usr/src/fio-static/fio 00:01:29.174 + FIO_BIN=/usr/src/fio-static/fio 00:01:29.174 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:29.174 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:29.175 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:29.175 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:29.175 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:29.175 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:29.175 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:29.175 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:29.175 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:29.175 20:51:02 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:29.175 20:51:02 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:29.175 20:51:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.175 20:51:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:29.175 20:51:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:29.175 20:51:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:29.175 20:51:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:29.175 20:51:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1 00:01:29.175 20:51:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:29.175 20:51:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:29.175 20:51:02 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1 00:01:29.175 20:51:02 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:29.175 20:51:02 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:29.175 20:51:02 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:29.175 20:51:02 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:29.175 20:51:02 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:29.175 20:51:02 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:29.175 20:51:02 -- scripts/common.sh@552 -- $ [[ 
-e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:29.175 20:51:02 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:29.175 20:51:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.175 20:51:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.175 20:51:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.175 20:51:02 -- paths/export.sh@5 -- $ export PATH 00:01:29.175 20:51:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.175 20:51:02 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:29.175 20:51:02 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:29.175 20:51:02 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732045862.XXXXXX 00:01:29.175 20:51:02 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732045862.s0gJGA 00:01:29.175 20:51:02 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:29.175 20:51:02 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:29.175 20:51:02 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:29.175 20:51:02 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:29.175 20:51:02 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:29.175 20:51:02 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:29.175 20:51:02 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:29.175 20:51:02 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.175 
20:51:02 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:29.175 20:51:02 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:29.175 20:51:02 -- pm/common@17 -- $ local monitor 00:01:29.175 20:51:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.175 20:51:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.175 20:51:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.175 20:51:02 -- pm/common@21 -- $ date +%s 00:01:29.175 20:51:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.175 20:51:02 -- pm/common@21 -- $ date +%s 00:01:29.175 20:51:02 -- pm/common@25 -- $ sleep 1 00:01:29.175 20:51:02 -- pm/common@21 -- $ date +%s 00:01:29.175 20:51:02 -- pm/common@21 -- $ date +%s 00:01:29.175 20:51:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732045862 00:01:29.175 20:51:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732045862 00:01:29.175 20:51:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732045862 00:01:29.175 20:51:02 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732045862 00:01:29.175 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732045862_collect-cpu-load.pm.log 00:01:29.175 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732045862_collect-vmstat.pm.log 00:01:29.175 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732045862_collect-cpu-temp.pm.log 00:01:29.175 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732045862_collect-bmc-pm.bmc.pm.log 00:01:30.115 20:51:03 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:30.115 20:51:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:30.115 20:51:03 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:30.115 20:51:03 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:30.115 20:51:03 -- spdk/autobuild.sh@16 -- $ date -u 00:01:30.115 Tue Nov 19 07:51:03 PM UTC 2024 00:01:30.115 20:51:03 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:30.115 v25.01-pre-199-gf22e807f1 00:01:30.115 20:51:03 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:30.115 20:51:03 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:30.115 20:51:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:30.115 20:51:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:30.115 20:51:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.115 ************************************ 00:01:30.115 START TEST asan 00:01:30.115 
************************************ 00:01:30.115 20:51:03 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:30.115 using asan 00:01:30.115 00:01:30.115 real 0m0.000s 00:01:30.115 user 0m0.000s 00:01:30.115 sys 0m0.000s 00:01:30.115 20:51:03 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:30.115 20:51:03 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:30.115 ************************************ 00:01:30.115 END TEST asan 00:01:30.115 ************************************ 00:01:30.115 20:51:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:30.115 20:51:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:30.115 20:51:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:30.115 20:51:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:30.115 20:51:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.374 ************************************ 00:01:30.374 START TEST ubsan 00:01:30.374 ************************************ 00:01:30.374 20:51:03 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:30.374 using ubsan 00:01:30.374 00:01:30.374 real 0m0.000s 00:01:30.374 user 0m0.000s 00:01:30.374 sys 0m0.000s 00:01:30.374 20:51:03 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:30.374 20:51:03 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:30.374 ************************************ 00:01:30.374 END TEST ubsan 00:01:30.374 ************************************ 00:01:30.374 20:51:03 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:30.374 20:51:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:30.374 20:51:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:30.374 20:51:03 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:30.374 20:51:03 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:30.374 20:51:03 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:30.374 20:51:03 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:30.374 20:51:03 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:30.374 20:51:03 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:30.374 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:30.374 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:30.632 Using 'verbs' RDMA provider 00:01:41.191 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:51.174 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:51.174 Creating mk/config.mk...done. 00:01:51.174 Creating mk/cc.flags.mk...done. 00:01:51.174 Type 'make' to build. 00:01:51.174 20:51:24 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:01:51.174 20:51:24 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:51.174 20:51:24 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:51.174 20:51:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.174 ************************************ 00:01:51.174 START TEST make 00:01:51.174 ************************************ 00:01:51.174 20:51:24 make -- common/autotest_common.sh@1129 -- $ make -j48 00:01:51.174 make[1]: Nothing to be done for 'all'. 
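For reference, everything this job runs is steered by the autorun-spdk.conf that was cat'ed and then sourced near the top of this log; spdk/autorun.sh re-sources the same file before handing it to autobuild.sh. A minimal sketch of that file for this flavor of run, with the values copied from the output above (the inline comments are editorial, not part of the real file):

  # /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf (sketch)
  SPDK_RUN_FUNCTIONAL_TEST=1
  SPDK_TEST_NVMF=1              # NVMe-oF target tests
  SPDK_TEST_NVME_CLI=1
  SPDK_TEST_NVMF_TRANSPORT=tcp
  SPDK_TEST_NVMF_NICS=e810      # the prologue maps e810 to 'modprobe ice'
  SPDK_RUN_ASAN=1               # matches --enable-asan on the configure line
  SPDK_RUN_UBSAN=1              # matches --enable-ubsan on the configure line
  NET_TYPE=phy
  RUN_NIGHTLY=1                 # set because the build came from the nightly trigger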
00:02:01.190 The Meson build system 00:02:01.190 Version: 1.5.0 00:02:01.190 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:01.190 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:01.190 Build type: native build 00:02:01.190 Program cat found: YES (/usr/bin/cat) 00:02:01.190 Project name: DPDK 00:02:01.190 Project version: 24.03.0 00:02:01.190 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:01.190 C linker for the host machine: cc ld.bfd 2.40-14 00:02:01.190 Host machine cpu family: x86_64 00:02:01.190 Host machine cpu: x86_64 00:02:01.190 Message: ## Building in Developer Mode ## 00:02:01.190 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:01.190 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:01.190 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:01.190 Program python3 found: YES (/usr/bin/python3) 00:02:01.190 Program cat found: YES (/usr/bin/cat) 00:02:01.190 Compiler for C supports arguments -march=native: YES 00:02:01.190 Checking for size of "void *" : 8 00:02:01.190 Checking for size of "void *" : 8 (cached) 00:02:01.190 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:01.190 Library m found: YES 00:02:01.190 Library numa found: YES 00:02:01.190 Has header "numaif.h" : YES 00:02:01.190 Library fdt found: NO 00:02:01.190 Library execinfo found: NO 00:02:01.190 Has header "execinfo.h" : YES 00:02:01.190 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:01.190 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:01.190 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:01.190 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:01.190 Run-time dependency openssl found: YES 3.1.1 00:02:01.190 Run-time dependency libpcap found: YES 1.10.4 00:02:01.190 Has header "pcap.h" with dependency libpcap: YES 00:02:01.190 Compiler for C supports arguments -Wcast-qual: YES 00:02:01.190 Compiler for C supports arguments -Wdeprecated: YES 00:02:01.190 Compiler for C supports arguments -Wformat: YES 00:02:01.190 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:01.190 Compiler for C supports arguments -Wformat-security: NO 00:02:01.190 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:01.190 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:01.190 Compiler for C supports arguments -Wnested-externs: YES 00:02:01.190 Compiler for C supports arguments -Wold-style-definition: YES 00:02:01.190 Compiler for C supports arguments -Wpointer-arith: YES 00:02:01.190 Compiler for C supports arguments -Wsign-compare: YES 00:02:01.190 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:01.190 Compiler for C supports arguments -Wundef: YES 00:02:01.190 Compiler for C supports arguments -Wwrite-strings: YES 00:02:01.190 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:01.190 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:01.190 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:01.190 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:01.190 Program objdump found: YES (/usr/bin/objdump) 00:02:01.190 Compiler for C supports arguments -mavx512f: YES 00:02:01.190 Checking if "AVX512 checking" compiles: YES 
00:02:01.190 Fetching value of define "__SSE4_2__" : 1 00:02:01.190 Fetching value of define "__AES__" : 1 00:02:01.190 Fetching value of define "__AVX__" : 1 00:02:01.190 Fetching value of define "__AVX2__" : (undefined) 00:02:01.190 Fetching value of define "__AVX512BW__" : (undefined) 00:02:01.190 Fetching value of define "__AVX512CD__" : (undefined) 00:02:01.190 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:01.190 Fetching value of define "__AVX512F__" : (undefined) 00:02:01.190 Fetching value of define "__AVX512VL__" : (undefined) 00:02:01.190 Fetching value of define "__PCLMUL__" : 1 00:02:01.190 Fetching value of define "__RDRND__" : 1 00:02:01.190 Fetching value of define "__RDSEED__" : (undefined) 00:02:01.190 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:01.190 Fetching value of define "__znver1__" : (undefined) 00:02:01.190 Fetching value of define "__znver2__" : (undefined) 00:02:01.190 Fetching value of define "__znver3__" : (undefined) 00:02:01.190 Fetching value of define "__znver4__" : (undefined) 00:02:01.190 Library asan found: YES 00:02:01.190 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:01.190 Message: lib/log: Defining dependency "log" 00:02:01.190 Message: lib/kvargs: Defining dependency "kvargs" 00:02:01.190 Message: lib/telemetry: Defining dependency "telemetry" 00:02:01.190 Library rt found: YES 00:02:01.190 Checking for function "getentropy" : NO 00:02:01.190 Message: lib/eal: Defining dependency "eal" 00:02:01.190 Message: lib/ring: Defining dependency "ring" 00:02:01.190 Message: lib/rcu: Defining dependency "rcu" 00:02:01.190 Message: lib/mempool: Defining dependency "mempool" 00:02:01.190 Message: lib/mbuf: Defining dependency "mbuf" 00:02:01.190 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:01.190 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:01.190 Compiler for C supports arguments -mpclmul: YES 00:02:01.190 Compiler for C supports arguments -maes: YES 00:02:01.190 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:01.190 Compiler for C supports arguments -mavx512bw: YES 00:02:01.190 Compiler for C supports arguments -mavx512dq: YES 00:02:01.191 Compiler for C supports arguments -mavx512vl: YES 00:02:01.191 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:01.191 Compiler for C supports arguments -mavx2: YES 00:02:01.191 Compiler for C supports arguments -mavx: YES 00:02:01.191 Message: lib/net: Defining dependency "net" 00:02:01.191 Message: lib/meter: Defining dependency "meter" 00:02:01.191 Message: lib/ethdev: Defining dependency "ethdev" 00:02:01.191 Message: lib/pci: Defining dependency "pci" 00:02:01.191 Message: lib/cmdline: Defining dependency "cmdline" 00:02:01.191 Message: lib/hash: Defining dependency "hash" 00:02:01.191 Message: lib/timer: Defining dependency "timer" 00:02:01.191 Message: lib/compressdev: Defining dependency "compressdev" 00:02:01.191 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:01.191 Message: lib/dmadev: Defining dependency "dmadev" 00:02:01.191 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:01.191 Message: lib/power: Defining dependency "power" 00:02:01.191 Message: lib/reorder: Defining dependency "reorder" 00:02:01.191 Message: lib/security: Defining dependency "security" 00:02:01.191 Has header "linux/userfaultfd.h" : YES 00:02:01.191 Has header "linux/vduse.h" : YES 00:02:01.191 Message: lib/vhost: Defining dependency "vhost" 00:02:01.191 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:02:01.191 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:01.191 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:01.191 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:01.191 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:01.191 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:01.191 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:01.191 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:01.191 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:01.191 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:01.191 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:01.191 Configuring doxy-api-html.conf using configuration 00:02:01.191 Configuring doxy-api-man.conf using configuration 00:02:01.191 Program mandb found: YES (/usr/bin/mandb) 00:02:01.191 Program sphinx-build found: NO 00:02:01.191 Configuring rte_build_config.h using configuration 00:02:01.191 Message: 00:02:01.191 ================= 00:02:01.191 Applications Enabled 00:02:01.191 ================= 00:02:01.191 00:02:01.191 apps: 00:02:01.191 00:02:01.191 00:02:01.191 Message: 00:02:01.191 ================= 00:02:01.191 Libraries Enabled 00:02:01.191 ================= 00:02:01.191 00:02:01.191 libs: 00:02:01.191 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:01.191 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:01.191 cryptodev, dmadev, power, reorder, security, vhost, 00:02:01.191 00:02:01.191 Message: 00:02:01.191 =============== 00:02:01.191 Drivers Enabled 00:02:01.191 =============== 00:02:01.191 00:02:01.191 common: 00:02:01.191 00:02:01.191 bus: 00:02:01.191 pci, vdev, 00:02:01.191 mempool: 00:02:01.191 ring, 00:02:01.191 dma: 00:02:01.191 00:02:01.191 net: 00:02:01.191 00:02:01.191 crypto: 00:02:01.191 00:02:01.191 compress: 00:02:01.191 00:02:01.191 vdpa: 00:02:01.191 00:02:01.191 00:02:01.191 Message: 00:02:01.191 ================= 00:02:01.191 Content Skipped 00:02:01.191 ================= 00:02:01.191 00:02:01.191 apps: 00:02:01.191 dumpcap: explicitly disabled via build config 00:02:01.191 graph: explicitly disabled via build config 00:02:01.191 pdump: explicitly disabled via build config 00:02:01.191 proc-info: explicitly disabled via build config 00:02:01.191 test-acl: explicitly disabled via build config 00:02:01.191 test-bbdev: explicitly disabled via build config 00:02:01.191 test-cmdline: explicitly disabled via build config 00:02:01.191 test-compress-perf: explicitly disabled via build config 00:02:01.191 test-crypto-perf: explicitly disabled via build config 00:02:01.191 test-dma-perf: explicitly disabled via build config 00:02:01.191 test-eventdev: explicitly disabled via build config 00:02:01.191 test-fib: explicitly disabled via build config 00:02:01.191 test-flow-perf: explicitly disabled via build config 00:02:01.191 test-gpudev: explicitly disabled via build config 00:02:01.191 test-mldev: explicitly disabled via build config 00:02:01.191 test-pipeline: explicitly disabled via build config 00:02:01.191 test-pmd: explicitly disabled via build config 00:02:01.191 test-regex: explicitly disabled via build config 00:02:01.191 test-sad: explicitly disabled via build config 00:02:01.191 test-security-perf: explicitly disabled via build config 00:02:01.191 00:02:01.191 libs: 00:02:01.191 argparse: explicitly 
disabled via build config 00:02:01.191 metrics: explicitly disabled via build config 00:02:01.191 acl: explicitly disabled via build config 00:02:01.191 bbdev: explicitly disabled via build config 00:02:01.191 bitratestats: explicitly disabled via build config 00:02:01.191 bpf: explicitly disabled via build config 00:02:01.191 cfgfile: explicitly disabled via build config 00:02:01.191 distributor: explicitly disabled via build config 00:02:01.191 efd: explicitly disabled via build config 00:02:01.191 eventdev: explicitly disabled via build config 00:02:01.191 dispatcher: explicitly disabled via build config 00:02:01.191 gpudev: explicitly disabled via build config 00:02:01.191 gro: explicitly disabled via build config 00:02:01.191 gso: explicitly disabled via build config 00:02:01.191 ip_frag: explicitly disabled via build config 00:02:01.191 jobstats: explicitly disabled via build config 00:02:01.191 latencystats: explicitly disabled via build config 00:02:01.191 lpm: explicitly disabled via build config 00:02:01.191 member: explicitly disabled via build config 00:02:01.191 pcapng: explicitly disabled via build config 00:02:01.191 rawdev: explicitly disabled via build config 00:02:01.191 regexdev: explicitly disabled via build config 00:02:01.191 mldev: explicitly disabled via build config 00:02:01.191 rib: explicitly disabled via build config 00:02:01.191 sched: explicitly disabled via build config 00:02:01.191 stack: explicitly disabled via build config 00:02:01.191 ipsec: explicitly disabled via build config 00:02:01.191 pdcp: explicitly disabled via build config 00:02:01.191 fib: explicitly disabled via build config 00:02:01.191 port: explicitly disabled via build config 00:02:01.191 pdump: explicitly disabled via build config 00:02:01.191 table: explicitly disabled via build config 00:02:01.191 pipeline: explicitly disabled via build config 00:02:01.191 graph: explicitly disabled via build config 00:02:01.191 node: explicitly disabled via build config 00:02:01.191 00:02:01.191 drivers: 00:02:01.191 common/cpt: not in enabled drivers build config 00:02:01.191 common/dpaax: not in enabled drivers build config 00:02:01.191 common/iavf: not in enabled drivers build config 00:02:01.191 common/idpf: not in enabled drivers build config 00:02:01.191 common/ionic: not in enabled drivers build config 00:02:01.191 common/mvep: not in enabled drivers build config 00:02:01.191 common/octeontx: not in enabled drivers build config 00:02:01.191 bus/auxiliary: not in enabled drivers build config 00:02:01.191 bus/cdx: not in enabled drivers build config 00:02:01.191 bus/dpaa: not in enabled drivers build config 00:02:01.191 bus/fslmc: not in enabled drivers build config 00:02:01.191 bus/ifpga: not in enabled drivers build config 00:02:01.191 bus/platform: not in enabled drivers build config 00:02:01.191 bus/uacce: not in enabled drivers build config 00:02:01.191 bus/vmbus: not in enabled drivers build config 00:02:01.191 common/cnxk: not in enabled drivers build config 00:02:01.191 common/mlx5: not in enabled drivers build config 00:02:01.191 common/nfp: not in enabled drivers build config 00:02:01.191 common/nitrox: not in enabled drivers build config 00:02:01.191 common/qat: not in enabled drivers build config 00:02:01.191 common/sfc_efx: not in enabled drivers build config 00:02:01.191 mempool/bucket: not in enabled drivers build config 00:02:01.191 mempool/cnxk: not in enabled drivers build config 00:02:01.191 mempool/dpaa: not in enabled drivers build config 00:02:01.191 mempool/dpaa2: not in 
enabled drivers build config 00:02:01.191 mempool/octeontx: not in enabled drivers build config 00:02:01.191 mempool/stack: not in enabled drivers build config 00:02:01.191 dma/cnxk: not in enabled drivers build config 00:02:01.191 dma/dpaa: not in enabled drivers build config 00:02:01.191 dma/dpaa2: not in enabled drivers build config 00:02:01.191 dma/hisilicon: not in enabled drivers build config 00:02:01.191 dma/idxd: not in enabled drivers build config 00:02:01.191 dma/ioat: not in enabled drivers build config 00:02:01.191 dma/skeleton: not in enabled drivers build config 00:02:01.191 net/af_packet: not in enabled drivers build config 00:02:01.191 net/af_xdp: not in enabled drivers build config 00:02:01.191 net/ark: not in enabled drivers build config 00:02:01.191 net/atlantic: not in enabled drivers build config 00:02:01.191 net/avp: not in enabled drivers build config 00:02:01.191 net/axgbe: not in enabled drivers build config 00:02:01.191 net/bnx2x: not in enabled drivers build config 00:02:01.191 net/bnxt: not in enabled drivers build config 00:02:01.191 net/bonding: not in enabled drivers build config 00:02:01.191 net/cnxk: not in enabled drivers build config 00:02:01.191 net/cpfl: not in enabled drivers build config 00:02:01.191 net/cxgbe: not in enabled drivers build config 00:02:01.191 net/dpaa: not in enabled drivers build config 00:02:01.191 net/dpaa2: not in enabled drivers build config 00:02:01.191 net/e1000: not in enabled drivers build config 00:02:01.191 net/ena: not in enabled drivers build config 00:02:01.191 net/enetc: not in enabled drivers build config 00:02:01.191 net/enetfec: not in enabled drivers build config 00:02:01.191 net/enic: not in enabled drivers build config 00:02:01.191 net/failsafe: not in enabled drivers build config 00:02:01.191 net/fm10k: not in enabled drivers build config 00:02:01.191 net/gve: not in enabled drivers build config 00:02:01.191 net/hinic: not in enabled drivers build config 00:02:01.191 net/hns3: not in enabled drivers build config 00:02:01.192 net/i40e: not in enabled drivers build config 00:02:01.192 net/iavf: not in enabled drivers build config 00:02:01.192 net/ice: not in enabled drivers build config 00:02:01.192 net/idpf: not in enabled drivers build config 00:02:01.192 net/igc: not in enabled drivers build config 00:02:01.192 net/ionic: not in enabled drivers build config 00:02:01.192 net/ipn3ke: not in enabled drivers build config 00:02:01.192 net/ixgbe: not in enabled drivers build config 00:02:01.192 net/mana: not in enabled drivers build config 00:02:01.192 net/memif: not in enabled drivers build config 00:02:01.192 net/mlx4: not in enabled drivers build config 00:02:01.192 net/mlx5: not in enabled drivers build config 00:02:01.192 net/mvneta: not in enabled drivers build config 00:02:01.192 net/mvpp2: not in enabled drivers build config 00:02:01.192 net/netvsc: not in enabled drivers build config 00:02:01.192 net/nfb: not in enabled drivers build config 00:02:01.192 net/nfp: not in enabled drivers build config 00:02:01.192 net/ngbe: not in enabled drivers build config 00:02:01.192 net/null: not in enabled drivers build config 00:02:01.192 net/octeontx: not in enabled drivers build config 00:02:01.192 net/octeon_ep: not in enabled drivers build config 00:02:01.192 net/pcap: not in enabled drivers build config 00:02:01.192 net/pfe: not in enabled drivers build config 00:02:01.192 net/qede: not in enabled drivers build config 00:02:01.192 net/ring: not in enabled drivers build config 00:02:01.192 net/sfc: not in enabled 
drivers build config 00:02:01.192 net/softnic: not in enabled drivers build config 00:02:01.192 net/tap: not in enabled drivers build config 00:02:01.192 net/thunderx: not in enabled drivers build config 00:02:01.192 net/txgbe: not in enabled drivers build config 00:02:01.192 net/vdev_netvsc: not in enabled drivers build config 00:02:01.192 net/vhost: not in enabled drivers build config 00:02:01.192 net/virtio: not in enabled drivers build config 00:02:01.192 net/vmxnet3: not in enabled drivers build config 00:02:01.192 raw/*: missing internal dependency, "rawdev" 00:02:01.192 crypto/armv8: not in enabled drivers build config 00:02:01.192 crypto/bcmfs: not in enabled drivers build config 00:02:01.192 crypto/caam_jr: not in enabled drivers build config 00:02:01.192 crypto/ccp: not in enabled drivers build config 00:02:01.192 crypto/cnxk: not in enabled drivers build config 00:02:01.192 crypto/dpaa_sec: not in enabled drivers build config 00:02:01.192 crypto/dpaa2_sec: not in enabled drivers build config 00:02:01.192 crypto/ipsec_mb: not in enabled drivers build config 00:02:01.192 crypto/mlx5: not in enabled drivers build config 00:02:01.192 crypto/mvsam: not in enabled drivers build config 00:02:01.192 crypto/nitrox: not in enabled drivers build config 00:02:01.192 crypto/null: not in enabled drivers build config 00:02:01.192 crypto/octeontx: not in enabled drivers build config 00:02:01.192 crypto/openssl: not in enabled drivers build config 00:02:01.192 crypto/scheduler: not in enabled drivers build config 00:02:01.192 crypto/uadk: not in enabled drivers build config 00:02:01.192 crypto/virtio: not in enabled drivers build config 00:02:01.192 compress/isal: not in enabled drivers build config 00:02:01.192 compress/mlx5: not in enabled drivers build config 00:02:01.192 compress/nitrox: not in enabled drivers build config 00:02:01.192 compress/octeontx: not in enabled drivers build config 00:02:01.192 compress/zlib: not in enabled drivers build config 00:02:01.192 regex/*: missing internal dependency, "regexdev" 00:02:01.192 ml/*: missing internal dependency, "mldev" 00:02:01.192 vdpa/ifc: not in enabled drivers build config 00:02:01.192 vdpa/mlx5: not in enabled drivers build config 00:02:01.192 vdpa/nfp: not in enabled drivers build config 00:02:01.192 vdpa/sfc: not in enabled drivers build config 00:02:01.192 event/*: missing internal dependency, "eventdev" 00:02:01.192 baseband/*: missing internal dependency, "bbdev" 00:02:01.192 gpu/*: missing internal dependency, "gpudev" 00:02:01.192 00:02:01.192 00:02:01.192 Build targets in project: 85 00:02:01.192 00:02:01.192 DPDK 24.03.0 00:02:01.192 00:02:01.192 User defined options 00:02:01.192 buildtype : debug 00:02:01.192 default_library : shared 00:02:01.192 libdir : lib 00:02:01.192 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:01.192 b_sanitize : address 00:02:01.192 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:01.192 c_link_args : 00:02:01.192 cpu_instruction_set: native 00:02:01.192 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:01.192 disable_libs : 
port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:01.192 enable_docs : false 00:02:01.192 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:01.192 enable_kmods : false 00:02:01.192 max_lcores : 128 00:02:01.192 tests : false 00:02:01.192 00:02:01.192 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:01.192 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:01.192 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:01.192 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:01.192 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:01.192 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:01.192 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:01.192 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:01.192 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:01.192 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:01.192 [9/268] Linking static target lib/librte_kvargs.a 00:02:01.192 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:01.192 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:01.192 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:01.192 [13/268] Linking static target lib/librte_log.a 00:02:01.192 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:01.192 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:01.192 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:01.768 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.768 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:01.768 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:01.768 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:01.768 [21/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:01.768 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:02.031 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:02.031 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:02.031 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:02.031 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:02.031 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:02.031 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:02.031 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:02.031 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:02.031 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:02.031 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 
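For context, the DPDK configuration summary above (the "User defined options" block) corresponds roughly to a meson invocation of the following shape. This is a reconstruction from the printed summary, not the literal command SPDK's configure script ran; the long disable_apps/disable_libs/enable_drivers lists are referenced rather than retyped:

  # Sketch: approximate meson setup behind the DPDK build configured above.
  meson setup dpdk/build-tmp dpdk \
      --buildtype=debug \
      --default-library=shared \
      --libdir=lib \
      --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Dmax_lcores=128 \
      -Dtests=false \
      -Denable_docs=false
      # ...plus the -Ddisable_apps=, -Ddisable_libs= and -Denable_drivers= lists shown above
  ninja -C dpdk/build-tmp   # produces the [N/268] compile/link lines in this log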
00:02:02.031 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:02.031 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:02.031 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:02.031 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:02.031 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:02.031 [38/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:02.031 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:02.031 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:02.031 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:02.031 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:02.031 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:02.031 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:02.031 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:02.031 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:02.031 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:02.031 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:02.031 [49/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:02.031 [50/268] Linking static target lib/librte_telemetry.a 00:02:02.031 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:02.031 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:02.031 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:02.031 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:02.031 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:02.031 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:02.292 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:02.292 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:02.292 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:02.292 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:02.292 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:02.292 [62/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.292 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:02.292 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:02.292 [65/268] Linking target lib/librte_log.so.24.1 00:02:02.555 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:02.555 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:02.820 [68/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:02.820 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:02.820 [70/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:02.820 [71/268] Linking target lib/librte_kvargs.so.24.1 00:02:02.820 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:02.820 [73/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:02.820 [74/268] Linking static target lib/librte_pci.a 00:02:02.820 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:02.820 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:03.080 [77/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:03.080 [78/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:03.080 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:03.080 [80/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:03.080 [81/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:03.080 [82/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:03.080 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:03.080 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:03.080 [85/268] Linking static target lib/librte_ring.a 00:02:03.080 [86/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:03.080 [87/268] Linking static target lib/librte_meter.a 00:02:03.080 [88/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:03.080 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:03.080 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:03.080 [91/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.080 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:03.080 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:03.080 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:03.080 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:03.080 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:03.080 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:03.080 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:03.080 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:03.080 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:03.080 [101/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:03.080 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:03.080 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:03.080 [104/268] Linking target lib/librte_telemetry.so.24.1 00:02:03.080 [105/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:03.080 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:03.080 [107/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:03.347 [108/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:03.347 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:03.347 [110/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:03.347 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:03.347 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:03.347 [113/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:03.347 [114/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:03.347 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:03.347 [116/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:03.347 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:03.347 [118/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.347 [119/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:03.347 [120/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:03.347 [121/268] Linking static target lib/librte_mempool.a 00:02:03.347 [122/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:03.347 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:03.347 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:03.610 [125/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:03.610 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:03.610 [127/268] Linking static target lib/librte_rcu.a 00:02:03.610 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:03.610 [129/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.610 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:03.610 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:03.610 [132/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.610 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:03.887 [134/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:03.887 [135/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:03.887 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:03.887 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:03.887 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:03.887 [139/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:03.887 [140/268] Linking static target lib/librte_cmdline.a 00:02:04.148 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:04.148 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:04.148 [143/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:04.148 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:04.148 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:04.148 [146/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:04.148 [147/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:04.148 [148/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:04.148 [149/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.148 [150/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:04.148 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:04.409 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:04.409 [153/268] Linking static target lib/librte_eal.a 
00:02:04.409 [154/268] Linking static target lib/librte_timer.a 00:02:04.409 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:04.409 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:04.409 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:04.409 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:04.409 [159/268] Linking static target lib/librte_dmadev.a 00:02:04.668 [160/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.668 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:04.668 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:04.668 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:04.668 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:04.668 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.668 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:04.927 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:04.927 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:04.927 [169/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:04.927 [170/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:04.927 [171/268] Linking static target lib/librte_net.a 00:02:04.927 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:04.927 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:04.927 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:04.927 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:04.927 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:04.927 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:04.927 [178/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.927 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.187 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:05.187 [181/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:05.187 [182/268] Linking static target lib/librte_power.a 00:02:05.187 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:05.187 [184/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:05.187 [185/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:05.187 [186/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:05.187 [187/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.187 [188/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:05.187 [189/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:05.447 [190/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:05.447 [191/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:05.447 [192/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 
00:02:05.447 [193/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:05.447 [194/268] Linking static target drivers/librte_bus_pci.a 00:02:05.447 [195/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:05.447 [196/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:05.447 [197/268] Linking static target drivers/librte_bus_vdev.a 00:02:05.447 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:05.447 [199/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:05.447 [200/268] Linking static target lib/librte_hash.a 00:02:05.447 [201/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:05.447 [202/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.447 [203/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.447 [204/268] Linking static target drivers/librte_mempool_ring.a 00:02:05.447 [205/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:05.447 [206/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:05.447 [207/268] Linking static target lib/librte_reorder.a 00:02:05.447 [208/268] Linking static target lib/librte_compressdev.a 00:02:05.706 [209/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.706 [210/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.706 [211/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:05.706 [212/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.965 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.965 [214/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.965 [215/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.223 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:06.223 [217/268] Linking static target lib/librte_security.a 00:02:06.482 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.740 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:07.307 [220/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:07.307 [221/268] Linking static target lib/librte_mbuf.a 00:02:07.874 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:07.874 [223/268] Linking static target lib/librte_cryptodev.a 00:02:07.874 [224/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.811 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:08.811 [226/268] Linking static target lib/librte_ethdev.a 00:02:08.811 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.189 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.447 [229/268] Linking target lib/librte_eal.so.24.1 00:02:10.447 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:10.447 [231/268] Linking target lib/librte_pci.so.24.1 
00:02:10.447 [232/268] Linking target lib/librte_ring.so.24.1 00:02:10.448 [233/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:10.448 [234/268] Linking target lib/librte_meter.so.24.1 00:02:10.448 [235/268] Linking target lib/librte_timer.so.24.1 00:02:10.448 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:10.707 [237/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:10.707 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:10.707 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:10.707 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:10.707 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:10.707 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:10.707 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:10.707 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:10.707 [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:10.707 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:10.965 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:10.965 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:10.965 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:10.965 [250/268] Linking target lib/librte_reorder.so.24.1 00:02:10.965 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:10.965 [252/268] Linking target lib/librte_net.so.24.1 00:02:10.965 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:11.224 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:11.224 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:11.224 [256/268] Linking target lib/librte_security.so.24.1 00:02:11.224 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:11.224 [258/268] Linking target lib/librte_hash.so.24.1 00:02:11.483 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:11.483 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:12.860 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.119 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:13.119 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:13.378 [264/268] Linking target lib/librte_power.so.24.1 00:02:39.984 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:39.984 [266/268] Linking static target lib/librte_vhost.a 00:02:40.921 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.921 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:40.921 INFO: autodetecting backend as ninja 00:02:40.921 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:41.856 CC lib/ut_mock/mock.o 00:02:41.856 CC lib/ut/ut.o 00:02:41.856 CC lib/log/log.o 00:02:41.856 CC lib/log/log_flags.o 00:02:41.856 CC lib/log/log_deprecated.o 00:02:42.114 LIB libspdk_ut.a 00:02:42.114 LIB libspdk_ut_mock.a 00:02:42.114 LIB libspdk_log.a 00:02:42.114 SO libspdk_ut.so.2.0 00:02:42.114 SO libspdk_ut_mock.so.6.0 00:02:42.114 SO libspdk_log.so.7.1 
00:02:42.114 SYMLINK libspdk_ut.so 00:02:42.114 SYMLINK libspdk_ut_mock.so 00:02:42.114 SYMLINK libspdk_log.so 00:02:42.372 CC lib/dma/dma.o 00:02:42.372 CXX lib/trace_parser/trace.o 00:02:42.372 CC lib/util/base64.o 00:02:42.372 CC lib/ioat/ioat.o 00:02:42.372 CC lib/util/bit_array.o 00:02:42.372 CC lib/util/cpuset.o 00:02:42.372 CC lib/util/crc16.o 00:02:42.372 CC lib/util/crc32.o 00:02:42.372 CC lib/util/crc32c.o 00:02:42.372 CC lib/util/crc32_ieee.o 00:02:42.372 CC lib/util/crc64.o 00:02:42.372 CC lib/util/dif.o 00:02:42.372 CC lib/util/fd.o 00:02:42.372 CC lib/util/fd_group.o 00:02:42.372 CC lib/util/file.o 00:02:42.372 CC lib/util/hexlify.o 00:02:42.372 CC lib/util/iov.o 00:02:42.372 CC lib/util/math.o 00:02:42.372 CC lib/util/net.o 00:02:42.372 CC lib/util/pipe.o 00:02:42.372 CC lib/util/strerror_tls.o 00:02:42.372 CC lib/util/string.o 00:02:42.372 CC lib/util/uuid.o 00:02:42.372 CC lib/util/xor.o 00:02:42.372 CC lib/util/zipf.o 00:02:42.372 CC lib/util/md5.o 00:02:42.372 CC lib/vfio_user/host/vfio_user.o 00:02:42.372 CC lib/vfio_user/host/vfio_user_pci.o 00:02:42.644 LIB libspdk_dma.a 00:02:42.644 SO libspdk_dma.so.5.0 00:02:42.644 SYMLINK libspdk_dma.so 00:02:42.903 LIB libspdk_ioat.a 00:02:42.903 SO libspdk_ioat.so.7.0 00:02:42.903 LIB libspdk_vfio_user.a 00:02:42.903 SO libspdk_vfio_user.so.5.0 00:02:42.903 SYMLINK libspdk_ioat.so 00:02:42.903 SYMLINK libspdk_vfio_user.so 00:02:43.162 LIB libspdk_util.a 00:02:43.162 SO libspdk_util.so.10.1 00:02:43.421 SYMLINK libspdk_util.so 00:02:43.421 CC lib/conf/conf.o 00:02:43.422 CC lib/vmd/vmd.o 00:02:43.422 CC lib/json/json_parse.o 00:02:43.422 CC lib/idxd/idxd.o 00:02:43.422 CC lib/rdma_utils/rdma_utils.o 00:02:43.422 CC lib/vmd/led.o 00:02:43.422 CC lib/env_dpdk/env.o 00:02:43.422 CC lib/json/json_util.o 00:02:43.422 CC lib/idxd/idxd_user.o 00:02:43.422 CC lib/json/json_write.o 00:02:43.422 CC lib/env_dpdk/memory.o 00:02:43.422 CC lib/idxd/idxd_kernel.o 00:02:43.422 CC lib/env_dpdk/pci.o 00:02:43.422 CC lib/env_dpdk/init.o 00:02:43.422 CC lib/env_dpdk/threads.o 00:02:43.422 CC lib/env_dpdk/pci_ioat.o 00:02:43.422 CC lib/env_dpdk/pci_virtio.o 00:02:43.422 CC lib/env_dpdk/pci_vmd.o 00:02:43.422 CC lib/env_dpdk/pci_idxd.o 00:02:43.422 CC lib/env_dpdk/pci_event.o 00:02:43.422 CC lib/env_dpdk/sigbus_handler.o 00:02:43.422 CC lib/env_dpdk/pci_dpdk.o 00:02:43.422 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:43.422 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:43.680 LIB libspdk_trace_parser.a 00:02:43.680 SO libspdk_trace_parser.so.6.0 00:02:43.680 SYMLINK libspdk_trace_parser.so 00:02:43.680 LIB libspdk_conf.a 00:02:43.680 SO libspdk_conf.so.6.0 00:02:43.942 LIB libspdk_rdma_utils.a 00:02:43.942 SYMLINK libspdk_conf.so 00:02:43.942 SO libspdk_rdma_utils.so.1.0 00:02:43.942 LIB libspdk_json.a 00:02:43.942 SYMLINK libspdk_rdma_utils.so 00:02:43.942 SO libspdk_json.so.6.0 00:02:43.942 SYMLINK libspdk_json.so 00:02:43.942 CC lib/rdma_provider/common.o 00:02:43.942 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:44.202 CC lib/jsonrpc/jsonrpc_server.o 00:02:44.202 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:44.202 CC lib/jsonrpc/jsonrpc_client.o 00:02:44.202 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:44.202 LIB libspdk_rdma_provider.a 00:02:44.202 LIB libspdk_idxd.a 00:02:44.460 SO libspdk_rdma_provider.so.7.0 00:02:44.460 SO libspdk_idxd.so.12.1 00:02:44.460 SYMLINK libspdk_rdma_provider.so 00:02:44.460 LIB libspdk_vmd.a 00:02:44.460 SYMLINK libspdk_idxd.so 00:02:44.460 SO libspdk_vmd.so.6.0 00:02:44.460 LIB libspdk_jsonrpc.a 00:02:44.460 SO 
libspdk_jsonrpc.so.6.0 00:02:44.460 SYMLINK libspdk_vmd.so 00:02:44.460 SYMLINK libspdk_jsonrpc.so 00:02:44.719 CC lib/rpc/rpc.o 00:02:44.978 LIB libspdk_rpc.a 00:02:44.978 SO libspdk_rpc.so.6.0 00:02:44.978 SYMLINK libspdk_rpc.so 00:02:45.237 CC lib/trace/trace.o 00:02:45.237 CC lib/keyring/keyring.o 00:02:45.237 CC lib/notify/notify.o 00:02:45.237 CC lib/trace/trace_flags.o 00:02:45.237 CC lib/keyring/keyring_rpc.o 00:02:45.237 CC lib/notify/notify_rpc.o 00:02:45.237 CC lib/trace/trace_rpc.o 00:02:45.237 LIB libspdk_notify.a 00:02:45.495 SO libspdk_notify.so.6.0 00:02:45.495 SYMLINK libspdk_notify.so 00:02:45.495 LIB libspdk_keyring.a 00:02:45.495 SO libspdk_keyring.so.2.0 00:02:45.495 LIB libspdk_trace.a 00:02:45.495 SO libspdk_trace.so.11.0 00:02:45.495 SYMLINK libspdk_keyring.so 00:02:45.495 SYMLINK libspdk_trace.so 00:02:45.753 CC lib/sock/sock.o 00:02:45.753 CC lib/sock/sock_rpc.o 00:02:45.753 CC lib/thread/thread.o 00:02:45.753 CC lib/thread/iobuf.o 00:02:46.320 LIB libspdk_sock.a 00:02:46.320 SO libspdk_sock.so.10.0 00:02:46.320 SYMLINK libspdk_sock.so 00:02:46.578 LIB libspdk_env_dpdk.a 00:02:46.578 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:46.578 CC lib/nvme/nvme_ctrlr.o 00:02:46.578 CC lib/nvme/nvme_fabric.o 00:02:46.578 CC lib/nvme/nvme_ns_cmd.o 00:02:46.578 CC lib/nvme/nvme_ns.o 00:02:46.578 CC lib/nvme/nvme_pcie_common.o 00:02:46.578 CC lib/nvme/nvme_pcie.o 00:02:46.578 CC lib/nvme/nvme_qpair.o 00:02:46.578 CC lib/nvme/nvme.o 00:02:46.578 CC lib/nvme/nvme_transport.o 00:02:46.578 CC lib/nvme/nvme_quirks.o 00:02:46.578 CC lib/nvme/nvme_discovery.o 00:02:46.578 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:46.578 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:46.578 CC lib/nvme/nvme_tcp.o 00:02:46.578 CC lib/nvme/nvme_opal.o 00:02:46.578 CC lib/nvme/nvme_io_msg.o 00:02:46.578 CC lib/nvme/nvme_poll_group.o 00:02:46.578 CC lib/nvme/nvme_zns.o 00:02:46.578 CC lib/nvme/nvme_stubs.o 00:02:46.578 CC lib/nvme/nvme_auth.o 00:02:46.578 CC lib/nvme/nvme_cuse.o 00:02:46.578 CC lib/nvme/nvme_rdma.o 00:02:46.578 SO libspdk_env_dpdk.so.15.1 00:02:46.837 SYMLINK libspdk_env_dpdk.so 00:02:47.774 LIB libspdk_thread.a 00:02:47.774 SO libspdk_thread.so.11.0 00:02:48.032 SYMLINK libspdk_thread.so 00:02:48.032 CC lib/virtio/virtio.o 00:02:48.032 CC lib/blob/blobstore.o 00:02:48.032 CC lib/fsdev/fsdev.o 00:02:48.032 CC lib/virtio/virtio_vhost_user.o 00:02:48.032 CC lib/init/json_config.o 00:02:48.032 CC lib/blob/request.o 00:02:48.032 CC lib/fsdev/fsdev_io.o 00:02:48.032 CC lib/accel/accel.o 00:02:48.032 CC lib/blob/zeroes.o 00:02:48.032 CC lib/init/subsystem.o 00:02:48.032 CC lib/virtio/virtio_vfio_user.o 00:02:48.032 CC lib/fsdev/fsdev_rpc.o 00:02:48.032 CC lib/accel/accel_rpc.o 00:02:48.032 CC lib/init/subsystem_rpc.o 00:02:48.032 CC lib/virtio/virtio_pci.o 00:02:48.032 CC lib/accel/accel_sw.o 00:02:48.032 CC lib/blob/blob_bs_dev.o 00:02:48.032 CC lib/init/rpc.o 00:02:48.599 LIB libspdk_init.a 00:02:48.599 SO libspdk_init.so.6.0 00:02:48.599 SYMLINK libspdk_init.so 00:02:48.599 LIB libspdk_virtio.a 00:02:48.599 SO libspdk_virtio.so.7.0 00:02:48.599 SYMLINK libspdk_virtio.so 00:02:48.599 CC lib/event/app.o 00:02:48.599 CC lib/event/reactor.o 00:02:48.599 CC lib/event/log_rpc.o 00:02:48.599 CC lib/event/app_rpc.o 00:02:48.599 CC lib/event/scheduler_static.o 00:02:48.857 LIB libspdk_fsdev.a 00:02:49.116 SO libspdk_fsdev.so.2.0 00:02:49.116 SYMLINK libspdk_fsdev.so 00:02:49.116 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:49.374 LIB libspdk_event.a 00:02:49.374 SO libspdk_event.so.14.0 00:02:49.374 SYMLINK 
libspdk_event.so 00:02:49.632 LIB libspdk_nvme.a 00:02:49.632 LIB libspdk_accel.a 00:02:49.632 SO libspdk_accel.so.16.0 00:02:49.632 SO libspdk_nvme.so.15.0 00:02:49.632 SYMLINK libspdk_accel.so 00:02:49.891 CC lib/bdev/bdev.o 00:02:49.891 CC lib/bdev/bdev_rpc.o 00:02:49.891 CC lib/bdev/bdev_zone.o 00:02:49.891 CC lib/bdev/part.o 00:02:49.891 CC lib/bdev/scsi_nvme.o 00:02:49.891 SYMLINK libspdk_nvme.so 00:02:50.150 LIB libspdk_fuse_dispatcher.a 00:02:50.150 SO libspdk_fuse_dispatcher.so.1.0 00:02:50.150 SYMLINK libspdk_fuse_dispatcher.so 00:02:52.681 LIB libspdk_blob.a 00:02:52.681 SO libspdk_blob.so.11.0 00:02:52.681 SYMLINK libspdk_blob.so 00:02:52.939 CC lib/blobfs/blobfs.o 00:02:52.939 CC lib/blobfs/tree.o 00:02:52.939 CC lib/lvol/lvol.o 00:02:53.197 LIB libspdk_bdev.a 00:02:53.197 SO libspdk_bdev.so.17.0 00:02:53.457 SYMLINK libspdk_bdev.so 00:02:53.457 CC lib/nbd/nbd.o 00:02:53.457 CC lib/scsi/dev.o 00:02:53.457 CC lib/nvmf/ctrlr.o 00:02:53.457 CC lib/nbd/nbd_rpc.o 00:02:53.457 CC lib/ublk/ublk.o 00:02:53.457 CC lib/nvmf/ctrlr_discovery.o 00:02:53.457 CC lib/ftl/ftl_core.o 00:02:53.457 CC lib/scsi/lun.o 00:02:53.457 CC lib/ublk/ublk_rpc.o 00:02:53.457 CC lib/ftl/ftl_init.o 00:02:53.457 CC lib/nvmf/ctrlr_bdev.o 00:02:53.457 CC lib/scsi/port.o 00:02:53.457 CC lib/ftl/ftl_layout.o 00:02:53.457 CC lib/nvmf/subsystem.o 00:02:53.457 CC lib/scsi/scsi.o 00:02:53.457 CC lib/nvmf/nvmf.o 00:02:53.457 CC lib/ftl/ftl_debug.o 00:02:53.457 CC lib/scsi/scsi_bdev.o 00:02:53.457 CC lib/nvmf/nvmf_rpc.o 00:02:53.457 CC lib/ftl/ftl_io.o 00:02:53.457 CC lib/scsi/scsi_rpc.o 00:02:53.457 CC lib/scsi/scsi_pr.o 00:02:53.457 CC lib/nvmf/tcp.o 00:02:53.457 CC lib/nvmf/transport.o 00:02:53.457 CC lib/scsi/task.o 00:02:53.457 CC lib/ftl/ftl_sb.o 00:02:53.457 CC lib/nvmf/stubs.o 00:02:53.457 CC lib/ftl/ftl_l2p.o 00:02:53.457 CC lib/ftl/ftl_l2p_flat.o 00:02:53.457 CC lib/nvmf/mdns_server.o 00:02:53.457 CC lib/nvmf/rdma.o 00:02:53.457 CC lib/ftl/ftl_nv_cache.o 00:02:53.457 CC lib/nvmf/auth.o 00:02:53.457 CC lib/ftl/ftl_band.o 00:02:53.457 CC lib/ftl/ftl_band_ops.o 00:02:53.457 CC lib/ftl/ftl_writer.o 00:02:53.457 CC lib/ftl/ftl_rq.o 00:02:53.457 CC lib/ftl/ftl_reloc.o 00:02:53.721 CC lib/ftl/ftl_l2p_cache.o 00:02:53.721 CC lib/ftl/ftl_p2l.o 00:02:53.721 CC lib/ftl/ftl_p2l_log.o 00:02:53.721 CC lib/ftl/mngt/ftl_mngt.o 00:02:53.721 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:53.721 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:53.721 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:53.721 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:53.984 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:53.984 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:53.984 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:53.984 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:53.984 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:53.984 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:53.984 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:53.984 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:53.984 CC lib/ftl/utils/ftl_conf.o 00:02:53.984 CC lib/ftl/utils/ftl_md.o 00:02:53.984 CC lib/ftl/utils/ftl_mempool.o 00:02:53.984 CC lib/ftl/utils/ftl_bitmap.o 00:02:53.984 CC lib/ftl/utils/ftl_property.o 00:02:54.244 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:54.244 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:54.244 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:54.244 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:54.244 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:54.244 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:54.244 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:54.244 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:54.244 CC lib/ftl/upgrade/ftl_sb_v5.o 
00:02:54.244 LIB libspdk_blobfs.a 00:02:54.506 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:54.506 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:54.506 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:54.506 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:54.506 SO libspdk_blobfs.so.10.0 00:02:54.506 CC lib/ftl/base/ftl_base_dev.o 00:02:54.506 CC lib/ftl/base/ftl_base_bdev.o 00:02:54.506 CC lib/ftl/ftl_trace.o 00:02:54.506 SYMLINK libspdk_blobfs.so 00:02:54.506 LIB libspdk_nbd.a 00:02:54.765 SO libspdk_nbd.so.7.0 00:02:54.765 SYMLINK libspdk_nbd.so 00:02:54.765 LIB libspdk_lvol.a 00:02:54.765 LIB libspdk_scsi.a 00:02:54.765 SO libspdk_lvol.so.10.0 00:02:54.765 SO libspdk_scsi.so.9.0 00:02:54.765 SYMLINK libspdk_lvol.so 00:02:54.765 SYMLINK libspdk_scsi.so 00:02:55.023 LIB libspdk_ublk.a 00:02:55.023 CC lib/iscsi/conn.o 00:02:55.023 CC lib/iscsi/init_grp.o 00:02:55.023 CC lib/iscsi/iscsi.o 00:02:55.023 CC lib/iscsi/param.o 00:02:55.023 CC lib/iscsi/portal_grp.o 00:02:55.023 CC lib/iscsi/tgt_node.o 00:02:55.023 CC lib/iscsi/iscsi_subsystem.o 00:02:55.023 CC lib/vhost/vhost.o 00:02:55.023 CC lib/iscsi/iscsi_rpc.o 00:02:55.023 CC lib/iscsi/task.o 00:02:55.023 CC lib/vhost/vhost_rpc.o 00:02:55.023 CC lib/vhost/vhost_scsi.o 00:02:55.023 CC lib/vhost/vhost_blk.o 00:02:55.023 CC lib/vhost/rte_vhost_user.o 00:02:55.023 SO libspdk_ublk.so.3.0 00:02:55.023 SYMLINK libspdk_ublk.so 00:02:55.608 LIB libspdk_ftl.a 00:02:55.608 SO libspdk_ftl.so.9.0 00:02:55.868 SYMLINK libspdk_ftl.so 00:02:56.433 LIB libspdk_vhost.a 00:02:56.433 SO libspdk_vhost.so.8.0 00:02:56.691 SYMLINK libspdk_vhost.so 00:02:56.950 LIB libspdk_iscsi.a 00:02:56.950 SO libspdk_iscsi.so.8.0 00:02:57.208 LIB libspdk_nvmf.a 00:02:57.208 SYMLINK libspdk_iscsi.so 00:02:57.208 SO libspdk_nvmf.so.20.0 00:02:57.466 SYMLINK libspdk_nvmf.so 00:02:57.725 CC module/env_dpdk/env_dpdk_rpc.o 00:02:57.725 CC module/blob/bdev/blob_bdev.o 00:02:57.725 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:57.725 CC module/fsdev/aio/fsdev_aio.o 00:02:57.725 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:57.725 CC module/accel/iaa/accel_iaa.o 00:02:57.725 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:57.725 CC module/sock/posix/posix.o 00:02:57.725 CC module/accel/iaa/accel_iaa_rpc.o 00:02:57.725 CC module/fsdev/aio/linux_aio_mgr.o 00:02:57.725 CC module/accel/error/accel_error.o 00:02:57.725 CC module/scheduler/gscheduler/gscheduler.o 00:02:57.725 CC module/accel/ioat/accel_ioat.o 00:02:57.725 CC module/keyring/linux/keyring.o 00:02:57.725 CC module/accel/dsa/accel_dsa.o 00:02:57.725 CC module/accel/error/accel_error_rpc.o 00:02:57.725 CC module/accel/ioat/accel_ioat_rpc.o 00:02:57.725 CC module/keyring/file/keyring.o 00:02:57.725 CC module/keyring/linux/keyring_rpc.o 00:02:57.725 CC module/accel/dsa/accel_dsa_rpc.o 00:02:57.725 CC module/keyring/file/keyring_rpc.o 00:02:57.725 LIB libspdk_env_dpdk_rpc.a 00:02:57.983 SO libspdk_env_dpdk_rpc.so.6.0 00:02:57.983 SYMLINK libspdk_env_dpdk_rpc.so 00:02:57.983 LIB libspdk_keyring_linux.a 00:02:57.983 LIB libspdk_scheduler_dpdk_governor.a 00:02:57.983 LIB libspdk_keyring_file.a 00:02:57.983 LIB libspdk_scheduler_gscheduler.a 00:02:57.983 SO libspdk_keyring_linux.so.1.0 00:02:57.983 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:57.983 SO libspdk_keyring_file.so.2.0 00:02:57.983 SO libspdk_scheduler_gscheduler.so.4.0 00:02:57.983 LIB libspdk_accel_ioat.a 00:02:57.983 LIB libspdk_scheduler_dynamic.a 00:02:57.983 SYMLINK libspdk_keyring_linux.so 00:02:57.983 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:57.983 LIB 
libspdk_accel_error.a 00:02:57.983 LIB libspdk_accel_iaa.a 00:02:57.983 SO libspdk_accel_ioat.so.6.0 00:02:57.983 SYMLINK libspdk_scheduler_gscheduler.so 00:02:57.983 SYMLINK libspdk_keyring_file.so 00:02:57.983 SO libspdk_scheduler_dynamic.so.4.0 00:02:57.983 SO libspdk_accel_error.so.2.0 00:02:57.983 SO libspdk_accel_iaa.so.3.0 00:02:57.983 SYMLINK libspdk_accel_ioat.so 00:02:57.983 SYMLINK libspdk_scheduler_dynamic.so 00:02:57.983 SYMLINK libspdk_accel_error.so 00:02:58.241 SYMLINK libspdk_accel_iaa.so 00:02:58.241 LIB libspdk_blob_bdev.a 00:02:58.241 SO libspdk_blob_bdev.so.11.0 00:02:58.241 LIB libspdk_accel_dsa.a 00:02:58.241 SO libspdk_accel_dsa.so.5.0 00:02:58.241 SYMLINK libspdk_blob_bdev.so 00:02:58.241 SYMLINK libspdk_accel_dsa.so 00:02:58.505 CC module/bdev/gpt/gpt.o 00:02:58.505 CC module/bdev/aio/bdev_aio.o 00:02:58.505 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:58.505 CC module/bdev/gpt/vbdev_gpt.o 00:02:58.505 CC module/blobfs/bdev/blobfs_bdev.o 00:02:58.505 CC module/bdev/aio/bdev_aio_rpc.o 00:02:58.505 CC module/bdev/delay/vbdev_delay.o 00:02:58.505 CC module/bdev/null/bdev_null.o 00:02:58.505 CC module/bdev/split/vbdev_split.o 00:02:58.505 CC module/bdev/nvme/bdev_nvme.o 00:02:58.505 CC module/bdev/error/vbdev_error.o 00:02:58.505 CC module/bdev/null/bdev_null_rpc.o 00:02:58.505 CC module/bdev/split/vbdev_split_rpc.o 00:02:58.505 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:58.505 CC module/bdev/lvol/vbdev_lvol.o 00:02:58.505 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:58.505 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:58.505 CC module/bdev/raid/bdev_raid.o 00:02:58.505 CC module/bdev/malloc/bdev_malloc.o 00:02:58.505 CC module/bdev/error/vbdev_error_rpc.o 00:02:58.505 CC module/bdev/nvme/nvme_rpc.o 00:02:58.505 CC module/bdev/passthru/vbdev_passthru.o 00:02:58.505 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:58.505 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:58.505 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:58.505 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:58.505 CC module/bdev/nvme/bdev_mdns_client.o 00:02:58.505 CC module/bdev/iscsi/bdev_iscsi.o 00:02:58.505 CC module/bdev/raid/bdev_raid_rpc.o 00:02:58.505 CC module/bdev/ftl/bdev_ftl.o 00:02:58.505 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:58.505 CC module/bdev/nvme/vbdev_opal.o 00:02:58.505 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:58.505 CC module/bdev/raid/bdev_raid_sb.o 00:02:58.505 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:58.505 CC module/bdev/raid/raid0.o 00:02:58.505 CC module/bdev/raid/raid1.o 00:02:58.505 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:58.505 CC module/bdev/raid/concat.o 00:02:58.505 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:58.505 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:58.505 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:58.765 LIB libspdk_blobfs_bdev.a 00:02:58.765 SO libspdk_blobfs_bdev.so.6.0 00:02:58.765 LIB libspdk_fsdev_aio.a 00:02:58.765 LIB libspdk_bdev_split.a 00:02:59.023 SO libspdk_fsdev_aio.so.1.0 00:02:59.023 SYMLINK libspdk_blobfs_bdev.so 00:02:59.023 SO libspdk_bdev_split.so.6.0 00:02:59.023 LIB libspdk_sock_posix.a 00:02:59.023 SO libspdk_sock_posix.so.6.0 00:02:59.023 LIB libspdk_bdev_error.a 00:02:59.023 SYMLINK libspdk_fsdev_aio.so 00:02:59.023 SYMLINK libspdk_bdev_split.so 00:02:59.023 SO libspdk_bdev_error.so.6.0 00:02:59.023 LIB libspdk_bdev_gpt.a 00:02:59.023 SYMLINK libspdk_sock_posix.so 00:02:59.023 SO libspdk_bdev_gpt.so.6.0 00:02:59.023 LIB libspdk_bdev_passthru.a 00:02:59.023 LIB libspdk_bdev_null.a 00:02:59.023 SYMLINK 
libspdk_bdev_error.so 00:02:59.023 SO libspdk_bdev_null.so.6.0 00:02:59.023 SO libspdk_bdev_passthru.so.6.0 00:02:59.023 LIB libspdk_bdev_ftl.a 00:02:59.023 LIB libspdk_bdev_zone_block.a 00:02:59.023 LIB libspdk_bdev_aio.a 00:02:59.023 SYMLINK libspdk_bdev_gpt.so 00:02:59.023 SO libspdk_bdev_zone_block.so.6.0 00:02:59.023 SO libspdk_bdev_ftl.so.6.0 00:02:59.023 SO libspdk_bdev_aio.so.6.0 00:02:59.023 SYMLINK libspdk_bdev_passthru.so 00:02:59.023 SYMLINK libspdk_bdev_null.so 00:02:59.281 LIB libspdk_bdev_iscsi.a 00:02:59.281 SYMLINK libspdk_bdev_zone_block.so 00:02:59.281 SYMLINK libspdk_bdev_ftl.so 00:02:59.281 SYMLINK libspdk_bdev_aio.so 00:02:59.281 SO libspdk_bdev_iscsi.so.6.0 00:02:59.281 LIB libspdk_bdev_delay.a 00:02:59.281 LIB libspdk_bdev_malloc.a 00:02:59.281 SO libspdk_bdev_delay.so.6.0 00:02:59.281 SO libspdk_bdev_malloc.so.6.0 00:02:59.281 SYMLINK libspdk_bdev_iscsi.so 00:02:59.281 SYMLINK libspdk_bdev_delay.so 00:02:59.281 SYMLINK libspdk_bdev_malloc.so 00:02:59.281 LIB libspdk_bdev_lvol.a 00:02:59.281 LIB libspdk_bdev_virtio.a 00:02:59.281 SO libspdk_bdev_lvol.so.6.0 00:02:59.281 SO libspdk_bdev_virtio.so.6.0 00:02:59.281 SYMLINK libspdk_bdev_lvol.so 00:02:59.539 SYMLINK libspdk_bdev_virtio.so 00:03:00.106 LIB libspdk_bdev_raid.a 00:03:00.107 SO libspdk_bdev_raid.so.6.0 00:03:00.107 SYMLINK libspdk_bdev_raid.so 00:03:02.009 LIB libspdk_bdev_nvme.a 00:03:02.009 SO libspdk_bdev_nvme.so.7.1 00:03:02.009 SYMLINK libspdk_bdev_nvme.so 00:03:02.575 CC module/event/subsystems/vmd/vmd.o 00:03:02.575 CC module/event/subsystems/iobuf/iobuf.o 00:03:02.575 CC module/event/subsystems/sock/sock.o 00:03:02.575 CC module/event/subsystems/keyring/keyring.o 00:03:02.575 CC module/event/subsystems/fsdev/fsdev.o 00:03:02.575 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:02.575 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:02.575 CC module/event/subsystems/scheduler/scheduler.o 00:03:02.576 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:02.576 LIB libspdk_event_keyring.a 00:03:02.576 LIB libspdk_event_vhost_blk.a 00:03:02.576 LIB libspdk_event_fsdev.a 00:03:02.576 LIB libspdk_event_scheduler.a 00:03:02.576 LIB libspdk_event_vmd.a 00:03:02.576 LIB libspdk_event_sock.a 00:03:02.576 SO libspdk_event_keyring.so.1.0 00:03:02.576 SO libspdk_event_vhost_blk.so.3.0 00:03:02.576 SO libspdk_event_fsdev.so.1.0 00:03:02.576 LIB libspdk_event_iobuf.a 00:03:02.576 SO libspdk_event_scheduler.so.4.0 00:03:02.576 SO libspdk_event_sock.so.5.0 00:03:02.576 SO libspdk_event_vmd.so.6.0 00:03:02.576 SO libspdk_event_iobuf.so.3.0 00:03:02.834 SYMLINK libspdk_event_keyring.so 00:03:02.834 SYMLINK libspdk_event_vhost_blk.so 00:03:02.834 SYMLINK libspdk_event_fsdev.so 00:03:02.834 SYMLINK libspdk_event_sock.so 00:03:02.834 SYMLINK libspdk_event_scheduler.so 00:03:02.834 SYMLINK libspdk_event_vmd.so 00:03:02.834 SYMLINK libspdk_event_iobuf.so 00:03:02.834 CC module/event/subsystems/accel/accel.o 00:03:03.093 LIB libspdk_event_accel.a 00:03:03.093 SO libspdk_event_accel.so.6.0 00:03:03.093 SYMLINK libspdk_event_accel.so 00:03:03.351 CC module/event/subsystems/bdev/bdev.o 00:03:03.609 LIB libspdk_event_bdev.a 00:03:03.609 SO libspdk_event_bdev.so.6.0 00:03:03.609 SYMLINK libspdk_event_bdev.so 00:03:03.866 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:03.866 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:03.866 CC module/event/subsystems/nbd/nbd.o 00:03:03.866 CC module/event/subsystems/scsi/scsi.o 00:03:03.866 CC module/event/subsystems/ublk/ublk.o 00:03:03.866 LIB libspdk_event_ublk.a 00:03:03.866 LIB 
libspdk_event_nbd.a 00:03:03.866 LIB libspdk_event_scsi.a 00:03:03.866 SO libspdk_event_ublk.so.3.0 00:03:03.866 SO libspdk_event_nbd.so.6.0 00:03:03.866 SO libspdk_event_scsi.so.6.0 00:03:03.866 SYMLINK libspdk_event_ublk.so 00:03:03.866 SYMLINK libspdk_event_nbd.so 00:03:03.866 SYMLINK libspdk_event_scsi.so 00:03:04.124 LIB libspdk_event_nvmf.a 00:03:04.124 SO libspdk_event_nvmf.so.6.0 00:03:04.124 SYMLINK libspdk_event_nvmf.so 00:03:04.124 CC module/event/subsystems/iscsi/iscsi.o 00:03:04.124 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:04.382 LIB libspdk_event_vhost_scsi.a 00:03:04.382 SO libspdk_event_vhost_scsi.so.3.0 00:03:04.382 LIB libspdk_event_iscsi.a 00:03:04.382 SO libspdk_event_iscsi.so.6.0 00:03:04.382 SYMLINK libspdk_event_vhost_scsi.so 00:03:04.382 SYMLINK libspdk_event_iscsi.so 00:03:04.670 SO libspdk.so.6.0 00:03:04.670 SYMLINK libspdk.so 00:03:04.670 TEST_HEADER include/spdk/accel.h 00:03:04.670 CC app/spdk_nvme_identify/identify.o 00:03:04.670 TEST_HEADER include/spdk/barrier.h 00:03:04.670 TEST_HEADER include/spdk/accel_module.h 00:03:04.670 TEST_HEADER include/spdk/assert.h 00:03:04.670 TEST_HEADER include/spdk/base64.h 00:03:04.670 CXX app/trace/trace.o 00:03:04.670 CC test/rpc_client/rpc_client_test.o 00:03:04.670 TEST_HEADER include/spdk/bdev.h 00:03:04.670 CC app/trace_record/trace_record.o 00:03:04.670 CC app/spdk_nvme_discover/discovery_aer.o 00:03:04.670 TEST_HEADER include/spdk/bdev_module.h 00:03:04.670 CC app/spdk_nvme_perf/perf.o 00:03:04.670 CC app/spdk_top/spdk_top.o 00:03:04.670 TEST_HEADER include/spdk/bdev_zone.h 00:03:04.670 CC app/spdk_lspci/spdk_lspci.o 00:03:04.670 TEST_HEADER include/spdk/bit_array.h 00:03:04.670 TEST_HEADER include/spdk/bit_pool.h 00:03:04.670 TEST_HEADER include/spdk/blob_bdev.h 00:03:04.670 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:04.670 TEST_HEADER include/spdk/blobfs.h 00:03:04.670 TEST_HEADER include/spdk/blob.h 00:03:04.670 TEST_HEADER include/spdk/conf.h 00:03:04.670 TEST_HEADER include/spdk/config.h 00:03:04.670 TEST_HEADER include/spdk/cpuset.h 00:03:04.670 TEST_HEADER include/spdk/crc16.h 00:03:04.670 TEST_HEADER include/spdk/crc32.h 00:03:04.670 TEST_HEADER include/spdk/crc64.h 00:03:04.670 TEST_HEADER include/spdk/dif.h 00:03:04.670 TEST_HEADER include/spdk/dma.h 00:03:04.670 TEST_HEADER include/spdk/endian.h 00:03:04.670 TEST_HEADER include/spdk/env_dpdk.h 00:03:04.670 TEST_HEADER include/spdk/env.h 00:03:04.670 TEST_HEADER include/spdk/fd_group.h 00:03:04.670 TEST_HEADER include/spdk/event.h 00:03:04.670 TEST_HEADER include/spdk/fd.h 00:03:04.670 TEST_HEADER include/spdk/file.h 00:03:04.670 TEST_HEADER include/spdk/fsdev.h 00:03:04.670 TEST_HEADER include/spdk/fsdev_module.h 00:03:04.670 TEST_HEADER include/spdk/ftl.h 00:03:04.670 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:04.670 TEST_HEADER include/spdk/gpt_spec.h 00:03:04.670 TEST_HEADER include/spdk/hexlify.h 00:03:04.670 TEST_HEADER include/spdk/histogram_data.h 00:03:04.670 TEST_HEADER include/spdk/idxd.h 00:03:04.670 TEST_HEADER include/spdk/idxd_spec.h 00:03:04.670 TEST_HEADER include/spdk/ioat.h 00:03:04.670 TEST_HEADER include/spdk/init.h 00:03:04.670 TEST_HEADER include/spdk/iscsi_spec.h 00:03:04.670 TEST_HEADER include/spdk/ioat_spec.h 00:03:04.670 TEST_HEADER include/spdk/json.h 00:03:04.670 TEST_HEADER include/spdk/jsonrpc.h 00:03:04.670 TEST_HEADER include/spdk/keyring.h 00:03:04.670 TEST_HEADER include/spdk/keyring_module.h 00:03:04.670 TEST_HEADER include/spdk/likely.h 00:03:04.670 TEST_HEADER include/spdk/log.h 00:03:04.670 
TEST_HEADER include/spdk/lvol.h 00:03:04.670 TEST_HEADER include/spdk/md5.h 00:03:04.670 TEST_HEADER include/spdk/memory.h 00:03:04.670 TEST_HEADER include/spdk/nbd.h 00:03:04.670 TEST_HEADER include/spdk/mmio.h 00:03:04.670 TEST_HEADER include/spdk/net.h 00:03:04.670 TEST_HEADER include/spdk/notify.h 00:03:04.670 TEST_HEADER include/spdk/nvme.h 00:03:04.670 TEST_HEADER include/spdk/nvme_intel.h 00:03:04.670 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:04.670 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:04.670 TEST_HEADER include/spdk/nvme_spec.h 00:03:04.670 TEST_HEADER include/spdk/nvme_zns.h 00:03:04.670 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:04.670 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:04.670 TEST_HEADER include/spdk/nvmf.h 00:03:04.670 TEST_HEADER include/spdk/nvmf_spec.h 00:03:04.670 TEST_HEADER include/spdk/nvmf_transport.h 00:03:04.670 TEST_HEADER include/spdk/opal.h 00:03:04.670 TEST_HEADER include/spdk/opal_spec.h 00:03:04.670 TEST_HEADER include/spdk/pci_ids.h 00:03:04.670 TEST_HEADER include/spdk/queue.h 00:03:04.670 TEST_HEADER include/spdk/pipe.h 00:03:04.670 TEST_HEADER include/spdk/reduce.h 00:03:04.670 TEST_HEADER include/spdk/rpc.h 00:03:04.670 TEST_HEADER include/spdk/scheduler.h 00:03:04.670 TEST_HEADER include/spdk/scsi.h 00:03:04.670 TEST_HEADER include/spdk/scsi_spec.h 00:03:04.670 TEST_HEADER include/spdk/sock.h 00:03:04.671 TEST_HEADER include/spdk/stdinc.h 00:03:04.671 TEST_HEADER include/spdk/thread.h 00:03:04.671 TEST_HEADER include/spdk/string.h 00:03:04.671 TEST_HEADER include/spdk/trace.h 00:03:04.671 TEST_HEADER include/spdk/trace_parser.h 00:03:04.671 TEST_HEADER include/spdk/tree.h 00:03:04.671 TEST_HEADER include/spdk/ublk.h 00:03:04.671 TEST_HEADER include/spdk/util.h 00:03:04.671 TEST_HEADER include/spdk/uuid.h 00:03:04.671 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:04.671 TEST_HEADER include/spdk/version.h 00:03:04.671 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:04.671 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:04.671 TEST_HEADER include/spdk/vhost.h 00:03:04.671 TEST_HEADER include/spdk/vmd.h 00:03:04.671 TEST_HEADER include/spdk/xor.h 00:03:04.671 TEST_HEADER include/spdk/zipf.h 00:03:04.671 CXX test/cpp_headers/accel.o 00:03:04.671 CXX test/cpp_headers/accel_module.o 00:03:04.671 CXX test/cpp_headers/assert.o 00:03:04.671 CXX test/cpp_headers/barrier.o 00:03:04.671 CXX test/cpp_headers/base64.o 00:03:04.671 CXX test/cpp_headers/bdev.o 00:03:04.671 CXX test/cpp_headers/bdev_zone.o 00:03:04.671 CXX test/cpp_headers/bdev_module.o 00:03:04.671 CXX test/cpp_headers/bit_array.o 00:03:04.671 CXX test/cpp_headers/bit_pool.o 00:03:04.671 CXX test/cpp_headers/blob_bdev.o 00:03:04.671 CXX test/cpp_headers/blobfs_bdev.o 00:03:04.671 CXX test/cpp_headers/blobfs.o 00:03:04.671 CXX test/cpp_headers/blob.o 00:03:04.671 CXX test/cpp_headers/conf.o 00:03:04.671 CXX test/cpp_headers/config.o 00:03:04.671 CXX test/cpp_headers/cpuset.o 00:03:04.671 CXX test/cpp_headers/crc16.o 00:03:04.671 CC app/spdk_dd/spdk_dd.o 00:03:04.971 CC app/iscsi_tgt/iscsi_tgt.o 00:03:04.971 CC app/nvmf_tgt/nvmf_main.o 00:03:04.971 CC examples/ioat/perf/perf.o 00:03:04.971 CC test/app/jsoncat/jsoncat.o 00:03:04.971 CC test/thread/poller_perf/poller_perf.o 00:03:04.971 CC app/spdk_tgt/spdk_tgt.o 00:03:04.971 CC test/app/histogram_perf/histogram_perf.o 00:03:04.971 CC test/env/pci/pci_ut.o 00:03:04.971 CC app/fio/nvme/fio_plugin.o 00:03:04.971 CC test/env/memory/memory_ut.o 00:03:04.971 CXX test/cpp_headers/crc32.o 00:03:04.971 CC examples/util/zipf/zipf.o 00:03:04.971 
CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:04.971 CC test/app/stub/stub.o 00:03:04.971 CC test/env/vtophys/vtophys.o 00:03:04.971 CC examples/ioat/verify/verify.o 00:03:04.971 CC test/dma/test_dma/test_dma.o 00:03:04.971 CC app/fio/bdev/fio_plugin.o 00:03:04.971 CC test/app/bdev_svc/bdev_svc.o 00:03:04.971 LINK spdk_lspci 00:03:04.971 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:04.971 CC test/env/mem_callbacks/mem_callbacks.o 00:03:05.239 LINK jsoncat 00:03:05.239 LINK rpc_client_test 00:03:05.239 LINK poller_perf 00:03:05.239 LINK spdk_nvme_discover 00:03:05.239 LINK histogram_perf 00:03:05.239 CXX test/cpp_headers/crc64.o 00:03:05.239 CXX test/cpp_headers/dif.o 00:03:05.239 LINK zipf 00:03:05.239 LINK interrupt_tgt 00:03:05.239 CXX test/cpp_headers/dma.o 00:03:05.239 LINK vtophys 00:03:05.239 LINK env_dpdk_post_init 00:03:05.239 CXX test/cpp_headers/endian.o 00:03:05.239 CXX test/cpp_headers/env_dpdk.o 00:03:05.239 LINK nvmf_tgt 00:03:05.239 CXX test/cpp_headers/env.o 00:03:05.239 CXX test/cpp_headers/event.o 00:03:05.239 CXX test/cpp_headers/fd_group.o 00:03:05.239 CXX test/cpp_headers/fd.o 00:03:05.239 LINK iscsi_tgt 00:03:05.239 CXX test/cpp_headers/file.o 00:03:05.239 CXX test/cpp_headers/fsdev.o 00:03:05.239 CXX test/cpp_headers/fsdev_module.o 00:03:05.239 CXX test/cpp_headers/ftl.o 00:03:05.239 CXX test/cpp_headers/fuse_dispatcher.o 00:03:05.239 CXX test/cpp_headers/gpt_spec.o 00:03:05.239 LINK spdk_tgt 00:03:05.239 LINK stub 00:03:05.239 CXX test/cpp_headers/hexlify.o 00:03:05.239 LINK bdev_svc 00:03:05.502 LINK ioat_perf 00:03:05.502 CXX test/cpp_headers/histogram_data.o 00:03:05.502 LINK spdk_trace_record 00:03:05.502 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:05.502 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:05.502 LINK verify 00:03:05.502 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:05.502 CXX test/cpp_headers/idxd.o 00:03:05.502 CXX test/cpp_headers/idxd_spec.o 00:03:05.502 CXX test/cpp_headers/init.o 00:03:05.502 CXX test/cpp_headers/ioat.o 00:03:05.502 CXX test/cpp_headers/ioat_spec.o 00:03:05.502 CXX test/cpp_headers/iscsi_spec.o 00:03:05.502 CXX test/cpp_headers/json.o 00:03:05.502 CXX test/cpp_headers/jsonrpc.o 00:03:05.764 LINK spdk_dd 00:03:05.764 CXX test/cpp_headers/keyring.o 00:03:05.764 CXX test/cpp_headers/keyring_module.o 00:03:05.764 CXX test/cpp_headers/likely.o 00:03:05.764 CXX test/cpp_headers/log.o 00:03:05.764 CXX test/cpp_headers/lvol.o 00:03:05.764 CXX test/cpp_headers/md5.o 00:03:05.764 CXX test/cpp_headers/memory.o 00:03:05.764 CXX test/cpp_headers/mmio.o 00:03:05.764 CXX test/cpp_headers/nbd.o 00:03:05.764 CXX test/cpp_headers/net.o 00:03:05.764 CXX test/cpp_headers/notify.o 00:03:05.764 CXX test/cpp_headers/nvme.o 00:03:05.764 LINK spdk_trace 00:03:05.764 CXX test/cpp_headers/nvme_intel.o 00:03:05.764 CXX test/cpp_headers/nvme_ocssd.o 00:03:05.764 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:05.764 CXX test/cpp_headers/nvme_spec.o 00:03:05.764 CXX test/cpp_headers/nvme_zns.o 00:03:05.764 CC test/event/event_perf/event_perf.o 00:03:05.764 CXX test/cpp_headers/nvmf_cmd.o 00:03:05.764 CC test/event/reactor/reactor.o 00:03:05.764 CC test/event/reactor_perf/reactor_perf.o 00:03:05.764 LINK pci_ut 00:03:05.764 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:06.033 CC test/event/app_repeat/app_repeat.o 00:03:06.033 CXX test/cpp_headers/nvmf.o 00:03:06.033 CXX test/cpp_headers/nvmf_spec.o 00:03:06.033 CC test/event/scheduler/scheduler.o 00:03:06.033 CXX test/cpp_headers/nvmf_transport.o 00:03:06.033 CC 
examples/sock/hello_world/hello_sock.o 00:03:06.033 CC examples/idxd/perf/perf.o 00:03:06.033 CC examples/vmd/lsvmd/lsvmd.o 00:03:06.033 CXX test/cpp_headers/opal.o 00:03:06.033 CXX test/cpp_headers/opal_spec.o 00:03:06.033 CXX test/cpp_headers/pci_ids.o 00:03:06.033 CXX test/cpp_headers/pipe.o 00:03:06.033 CC examples/thread/thread/thread_ex.o 00:03:06.033 CC examples/vmd/led/led.o 00:03:06.033 CXX test/cpp_headers/queue.o 00:03:06.033 LINK nvme_fuzz 00:03:06.033 CXX test/cpp_headers/reduce.o 00:03:06.033 LINK test_dma 00:03:06.033 CXX test/cpp_headers/rpc.o 00:03:06.033 CXX test/cpp_headers/scheduler.o 00:03:06.033 CXX test/cpp_headers/scsi.o 00:03:06.033 LINK spdk_bdev 00:03:06.033 CXX test/cpp_headers/scsi_spec.o 00:03:06.033 CXX test/cpp_headers/sock.o 00:03:06.033 LINK event_perf 00:03:06.033 LINK reactor 00:03:06.033 CXX test/cpp_headers/stdinc.o 00:03:06.292 CXX test/cpp_headers/string.o 00:03:06.292 CXX test/cpp_headers/thread.o 00:03:06.292 CXX test/cpp_headers/trace.o 00:03:06.292 LINK reactor_perf 00:03:06.292 CXX test/cpp_headers/trace_parser.o 00:03:06.292 CXX test/cpp_headers/tree.o 00:03:06.292 CXX test/cpp_headers/ublk.o 00:03:06.292 LINK app_repeat 00:03:06.292 LINK spdk_nvme 00:03:06.292 CXX test/cpp_headers/util.o 00:03:06.292 CXX test/cpp_headers/uuid.o 00:03:06.292 LINK mem_callbacks 00:03:06.292 CXX test/cpp_headers/version.o 00:03:06.292 LINK lsvmd 00:03:06.292 CXX test/cpp_headers/vfio_user_pci.o 00:03:06.292 CXX test/cpp_headers/vfio_user_spec.o 00:03:06.292 CXX test/cpp_headers/vhost.o 00:03:06.292 CXX test/cpp_headers/vmd.o 00:03:06.292 CXX test/cpp_headers/xor.o 00:03:06.292 CXX test/cpp_headers/zipf.o 00:03:06.292 LINK led 00:03:06.292 CC app/vhost/vhost.o 00:03:06.551 LINK scheduler 00:03:06.551 LINK vhost_fuzz 00:03:06.551 LINK hello_sock 00:03:06.551 LINK thread 00:03:06.811 CC test/nvme/reset/reset.o 00:03:06.811 CC test/nvme/reserve/reserve.o 00:03:06.811 CC test/nvme/err_injection/err_injection.o 00:03:06.811 CC test/nvme/overhead/overhead.o 00:03:06.811 LINK idxd_perf 00:03:06.811 CC test/nvme/startup/startup.o 00:03:06.811 CC test/nvme/fdp/fdp.o 00:03:06.811 CC test/nvme/e2edp/nvme_dp.o 00:03:06.811 CC test/nvme/aer/aer.o 00:03:06.811 CC test/nvme/compliance/nvme_compliance.o 00:03:06.811 CC test/nvme/connect_stress/connect_stress.o 00:03:06.811 CC test/nvme/cuse/cuse.o 00:03:06.811 CC test/nvme/simple_copy/simple_copy.o 00:03:06.811 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:06.811 CC test/nvme/sgl/sgl.o 00:03:06.811 CC test/nvme/fused_ordering/fused_ordering.o 00:03:06.811 CC test/nvme/boot_partition/boot_partition.o 00:03:06.811 LINK spdk_nvme_identify 00:03:06.811 CC test/accel/dif/dif.o 00:03:06.811 LINK vhost 00:03:06.811 CC test/blobfs/mkfs/mkfs.o 00:03:06.811 LINK spdk_nvme_perf 00:03:06.811 CC test/lvol/esnap/esnap.o 00:03:06.811 LINK spdk_top 00:03:07.070 LINK err_injection 00:03:07.070 LINK boot_partition 00:03:07.070 LINK startup 00:03:07.070 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:07.070 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:07.070 CC examples/nvme/hotplug/hotplug.o 00:03:07.070 CC examples/nvme/arbitration/arbitration.o 00:03:07.070 CC examples/nvme/abort/abort.o 00:03:07.070 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:07.070 CC examples/nvme/hello_world/hello_world.o 00:03:07.070 CC examples/nvme/reconnect/reconnect.o 00:03:07.070 LINK fused_ordering 00:03:07.070 LINK mkfs 00:03:07.070 CC examples/accel/perf/accel_perf.o 00:03:07.070 LINK doorbell_aers 00:03:07.070 CC 
examples/blob/hello_world/hello_blob.o 00:03:07.070 LINK connect_stress 00:03:07.070 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:07.070 CC examples/blob/cli/blobcli.o 00:03:07.070 LINK sgl 00:03:07.070 LINK reserve 00:03:07.070 LINK aer 00:03:07.070 LINK overhead 00:03:07.070 LINK simple_copy 00:03:07.329 LINK reset 00:03:07.329 LINK nvme_compliance 00:03:07.329 LINK nvme_dp 00:03:07.329 LINK cmb_copy 00:03:07.329 LINK fdp 00:03:07.329 LINK memory_ut 00:03:07.329 LINK pmr_persistence 00:03:07.329 LINK hotplug 00:03:07.329 LINK hello_world 00:03:07.587 LINK hello_blob 00:03:07.587 LINK arbitration 00:03:07.588 LINK hello_fsdev 00:03:07.588 LINK abort 00:03:07.588 LINK reconnect 00:03:07.846 LINK accel_perf 00:03:07.846 LINK dif 00:03:07.846 LINK nvme_manage 00:03:07.846 LINK blobcli 00:03:08.104 CC examples/bdev/hello_world/hello_bdev.o 00:03:08.104 CC examples/bdev/bdevperf/bdevperf.o 00:03:08.104 CC test/bdev/bdevio/bdevio.o 00:03:08.362 LINK iscsi_fuzz 00:03:08.363 LINK hello_bdev 00:03:08.621 LINK bdevio 00:03:08.621 LINK cuse 00:03:09.188 LINK bdevperf 00:03:09.447 CC examples/nvmf/nvmf/nvmf.o 00:03:10.013 LINK nvmf 00:03:14.198 LINK esnap 00:03:14.198 00:03:14.198 real 1m23.333s 00:03:14.198 user 13m13.726s 00:03:14.198 sys 2m34.740s 00:03:14.198 20:52:47 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:14.198 20:52:47 make -- common/autotest_common.sh@10 -- $ set +x 00:03:14.198 ************************************ 00:03:14.198 END TEST make 00:03:14.198 ************************************ 00:03:14.198 20:52:47 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:14.198 20:52:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:14.198 20:52:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:14.198 20:52:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:14.198 20:52:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:14.198 20:52:47 -- pm/common@44 -- $ pid=2770233 00:03:14.198 20:52:47 -- pm/common@50 -- $ kill -TERM 2770233 00:03:14.198 20:52:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:14.198 20:52:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:14.198 20:52:47 -- pm/common@44 -- $ pid=2770235 00:03:14.198 20:52:47 -- pm/common@50 -- $ kill -TERM 2770235 00:03:14.198 20:52:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:14.198 20:52:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:14.198 20:52:47 -- pm/common@44 -- $ pid=2770237 00:03:14.198 20:52:47 -- pm/common@50 -- $ kill -TERM 2770237 00:03:14.198 20:52:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:14.198 20:52:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:14.198 20:52:47 -- pm/common@44 -- $ pid=2770266 00:03:14.198 20:52:47 -- pm/common@50 -- $ sudo -E kill -TERM 2770266 00:03:14.198 20:52:47 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:14.198 20:52:47 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:14.198 20:52:47 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:14.198 20:52:47 -- 
common/autotest_common.sh@1693 -- # lcov --version 00:03:14.198 20:52:47 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:14.198 20:52:47 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:14.198 20:52:47 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:14.198 20:52:47 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:14.198 20:52:47 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:14.198 20:52:47 -- scripts/common.sh@336 -- # IFS=.-: 00:03:14.198 20:52:47 -- scripts/common.sh@336 -- # read -ra ver1 00:03:14.198 20:52:47 -- scripts/common.sh@337 -- # IFS=.-: 00:03:14.198 20:52:47 -- scripts/common.sh@337 -- # read -ra ver2 00:03:14.198 20:52:47 -- scripts/common.sh@338 -- # local 'op=<' 00:03:14.198 20:52:47 -- scripts/common.sh@340 -- # ver1_l=2 00:03:14.198 20:52:47 -- scripts/common.sh@341 -- # ver2_l=1 00:03:14.198 20:52:47 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:14.198 20:52:47 -- scripts/common.sh@344 -- # case "$op" in 00:03:14.198 20:52:47 -- scripts/common.sh@345 -- # : 1 00:03:14.198 20:52:47 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:14.198 20:52:47 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:14.198 20:52:47 -- scripts/common.sh@365 -- # decimal 1 00:03:14.198 20:52:47 -- scripts/common.sh@353 -- # local d=1 00:03:14.198 20:52:47 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:14.198 20:52:47 -- scripts/common.sh@355 -- # echo 1 00:03:14.198 20:52:47 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:14.198 20:52:47 -- scripts/common.sh@366 -- # decimal 2 00:03:14.198 20:52:47 -- scripts/common.sh@353 -- # local d=2 00:03:14.198 20:52:47 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:14.198 20:52:47 -- scripts/common.sh@355 -- # echo 2 00:03:14.198 20:52:47 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:14.198 20:52:47 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:14.198 20:52:47 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:14.198 20:52:47 -- scripts/common.sh@368 -- # return 0 00:03:14.198 20:52:47 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:14.198 20:52:47 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:14.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:14.198 --rc genhtml_branch_coverage=1 00:03:14.198 --rc genhtml_function_coverage=1 00:03:14.198 --rc genhtml_legend=1 00:03:14.198 --rc geninfo_all_blocks=1 00:03:14.198 --rc geninfo_unexecuted_blocks=1 00:03:14.198 00:03:14.198 ' 00:03:14.198 20:52:47 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:14.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:14.198 --rc genhtml_branch_coverage=1 00:03:14.198 --rc genhtml_function_coverage=1 00:03:14.198 --rc genhtml_legend=1 00:03:14.198 --rc geninfo_all_blocks=1 00:03:14.198 --rc geninfo_unexecuted_blocks=1 00:03:14.198 00:03:14.198 ' 00:03:14.198 20:52:47 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:14.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:14.198 --rc genhtml_branch_coverage=1 00:03:14.198 --rc genhtml_function_coverage=1 00:03:14.198 --rc genhtml_legend=1 00:03:14.198 --rc geninfo_all_blocks=1 00:03:14.198 --rc geninfo_unexecuted_blocks=1 00:03:14.198 00:03:14.198 ' 00:03:14.198 20:52:47 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:14.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:14.198 --rc genhtml_branch_coverage=1 00:03:14.198 
--rc genhtml_function_coverage=1 00:03:14.198 --rc genhtml_legend=1 00:03:14.198 --rc geninfo_all_blocks=1 00:03:14.198 --rc geninfo_unexecuted_blocks=1 00:03:14.198 00:03:14.198 ' 00:03:14.198 20:52:47 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:14.198 20:52:47 -- nvmf/common.sh@7 -- # uname -s 00:03:14.198 20:52:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:14.198 20:52:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:14.198 20:52:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:14.198 20:52:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:14.198 20:52:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:14.198 20:52:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:14.198 20:52:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:14.198 20:52:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:14.458 20:52:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:14.458 20:52:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:14.458 20:52:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:14.458 20:52:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:14.458 20:52:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:14.458 20:52:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:14.458 20:52:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:14.458 20:52:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:14.458 20:52:47 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:14.458 20:52:47 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:14.458 20:52:48 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:14.458 20:52:48 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:14.458 20:52:48 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:14.458 20:52:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:14.458 20:52:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:14.458 20:52:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:14.458 20:52:48 -- paths/export.sh@5 -- # export PATH 00:03:14.458 20:52:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:14.458 20:52:48 -- nvmf/common.sh@51 -- # : 0 00:03:14.458 20:52:48 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:14.458 
20:52:48 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:14.458 20:52:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:14.458 20:52:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:14.458 20:52:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:14.458 20:52:48 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:14.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:14.458 20:52:48 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:14.458 20:52:48 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:14.458 20:52:48 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:14.458 20:52:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:14.458 20:52:48 -- spdk/autotest.sh@32 -- # uname -s 00:03:14.458 20:52:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:14.458 20:52:48 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:14.458 20:52:48 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:14.458 20:52:48 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:14.458 20:52:48 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:14.458 20:52:48 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:14.458 20:52:48 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:14.458 20:52:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:14.458 20:52:48 -- spdk/autotest.sh@48 -- # udevadm_pid=2831255 00:03:14.458 20:52:48 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:14.458 20:52:48 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:14.458 20:52:48 -- pm/common@17 -- # local monitor 00:03:14.458 20:52:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:14.458 20:52:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:14.458 20:52:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:14.458 20:52:48 -- pm/common@21 -- # date +%s 00:03:14.458 20:52:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:14.458 20:52:48 -- pm/common@21 -- # date +%s 00:03:14.458 20:52:48 -- pm/common@25 -- # sleep 1 00:03:14.458 20:52:48 -- pm/common@21 -- # date +%s 00:03:14.458 20:52:48 -- pm/common@21 -- # date +%s 00:03:14.458 20:52:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732045968 00:03:14.458 20:52:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732045968 00:03:14.458 20:52:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732045968 00:03:14.458 20:52:48 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732045968 00:03:14.459 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732045968_collect-vmstat.pm.log 
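Two details in the block above are worth decoding. The "[: : integer expression expected" message comes from common.sh line 33 comparing an empty string with -eq; the test fails harmlessly and the script carries on. More importantly, autotest.sh@33-40 swaps the kernel core pattern over to SPDK's collector before the resource monitors start. A sketch of that sequence, under the assumption that the pattern is read from and written back to /proc/sys/kernel/core_pattern (the xtrace shows the commands but not their redirections):

# coredump capture setup (sketch; the /proc path is an assumption, not visible in the trace)
old_core_pattern=$(< /proc/sys/kernel/core_pattern)   # "|/usr/lib/systemd/systemd-coredump ..." on this host
mkdir -p "$output_dir/coredumps"
# pipe every core dump into SPDK's collector instead of systemd-coredump
echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
# a second echo publishes the coredump directory; its destination is not shown in the xtrace

After that, collect-cpu-load, collect-vmstat, collect-cpu-temp and collect-bmc-pm are launched with the shared monitor.autotest.sh.1732045968 prefix, each redirecting its samples to a matching *.pm.log under the output/power directory as the Redirecting lines around this point show.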
00:03:14.459 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732045968_collect-cpu-load.pm.log 00:03:14.459 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732045968_collect-cpu-temp.pm.log 00:03:14.459 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732045968_collect-bmc-pm.bmc.pm.log 00:03:15.394 20:52:49 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:15.394 20:52:49 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:15.394 20:52:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:15.394 20:52:49 -- common/autotest_common.sh@10 -- # set +x 00:03:15.394 20:52:49 -- spdk/autotest.sh@59 -- # create_test_list 00:03:15.394 20:52:49 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:15.394 20:52:49 -- common/autotest_common.sh@10 -- # set +x 00:03:15.394 20:52:49 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:15.394 20:52:49 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:15.394 20:52:49 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:15.394 20:52:49 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:15.394 20:52:49 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:15.394 20:52:49 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:15.394 20:52:49 -- common/autotest_common.sh@1457 -- # uname 00:03:15.394 20:52:49 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:15.394 20:52:49 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:15.394 20:52:49 -- common/autotest_common.sh@1477 -- # uname 00:03:15.394 20:52:49 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:15.394 20:52:49 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:15.394 20:52:49 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:15.394 lcov: LCOV version 1.15 00:03:15.394 20:52:49 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:41.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:41.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:51.902 20:53:24 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:51.902 20:53:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.902 20:53:24 -- common/autotest_common.sh@10 -- # set +x 00:03:51.902 20:53:24 -- spdk/autotest.sh@78 -- # rm -f 00:03:51.902 20:53:24 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:51.902 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:51.902 0000:00:04.7 (8086 0e27): Already using the ioatdma 
driver 00:03:51.902 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:51.902 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:51.902 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:51.902 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:51.902 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:51.902 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:51.902 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:51.902 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:52.160 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:52.160 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:52.160 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:52.160 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:52.160 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:52.160 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:52.160 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:52.160 20:53:25 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:52.160 20:53:25 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:52.160 20:53:25 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:52.160 20:53:25 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:52.160 20:53:25 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:52.160 20:53:25 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:52.160 20:53:25 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:52.160 20:53:25 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:52.160 20:53:25 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:52.160 20:53:25 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:52.160 20:53:25 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:52.160 20:53:25 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:52.160 20:53:25 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:52.160 20:53:25 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:52.160 20:53:25 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:52.419 No valid GPT data, bailing 00:03:52.419 20:53:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:52.419 20:53:25 -- scripts/common.sh@394 -- # pt= 00:03:52.419 20:53:25 -- scripts/common.sh@395 -- # return 1 00:03:52.419 20:53:25 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:52.419 1+0 records in 00:03:52.419 1+0 records out 00:03:52.419 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00164087 s, 639 MB/s 00:03:52.419 20:53:25 -- spdk/autotest.sh@105 -- # sync 00:03:52.419 20:53:25 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:52.419 20:53:25 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:52.419 20:53:25 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:54.323 20:53:28 -- spdk/autotest.sh@111 -- # uname -s 00:03:54.323 20:53:28 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:54.323 20:53:28 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:54.323 20:53:28 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:55.257 Hugepages 00:03:55.257 node hugesize free / total 00:03:55.257 node0 1048576kB 0 / 0 
00:03:55.257 node0 2048kB 0 / 0 00:03:55.257 node1 1048576kB 0 / 0 00:03:55.257 node1 2048kB 0 / 0 00:03:55.257 00:03:55.257 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:55.516 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:55.516 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:55.516 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:55.516 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:55.516 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:55.516 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:55.516 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:55.516 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:55.516 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:55.517 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:55.517 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:55.517 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:55.517 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:55.517 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:55.517 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:55.517 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:55.517 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:55.517 20:53:29 -- spdk/autotest.sh@117 -- # uname -s 00:03:55.517 20:53:29 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:55.517 20:53:29 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:55.517 20:53:29 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:56.893 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:56.893 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:56.893 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:56.893 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:56.893 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:56.893 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:56.893 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:56.893 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:56.893 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:56.893 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:56.893 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:56.893 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:56.893 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:56.893 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:56.893 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:56.893 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:57.830 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:57.830 20:53:31 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:58.768 20:53:32 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:58.768 20:53:32 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:58.768 20:53:32 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:58.768 20:53:32 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:58.768 20:53:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:58.768 20:53:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:58.768 20:53:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:58.768 20:53:32 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:58.768 20:53:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:58.768 20:53:32 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:58.768 20:53:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:58.768 20:53:32 -- 
common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:00.142 Waiting for block devices as requested 00:04:00.142 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:00.142 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:00.142 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:00.401 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:00.401 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:00.401 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:00.401 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:00.659 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:00.659 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:00.659 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:00.659 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:00.917 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:00.917 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:00.917 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:01.175 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:01.175 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:01.175 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:01.433 20:53:34 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:01.433 20:53:34 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:01.433 20:53:34 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:01.433 20:53:34 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:04:01.433 20:53:34 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:01.433 20:53:34 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:01.433 20:53:34 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:01.433 20:53:34 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:01.433 20:53:34 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:01.433 20:53:34 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:01.433 20:53:34 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:01.433 20:53:34 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:01.433 20:53:34 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:01.433 20:53:34 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:01.433 20:53:34 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:01.433 20:53:34 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:01.433 20:53:34 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:01.433 20:53:34 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:01.433 20:53:34 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:01.433 20:53:34 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:01.433 20:53:34 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:01.433 20:53:34 -- common/autotest_common.sh@1543 -- # continue 00:04:01.433 20:53:34 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:01.433 20:53:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:01.433 20:53:34 -- common/autotest_common.sh@10 -- # set +x 00:04:01.433 20:53:35 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:01.433 20:53:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:01.433 20:53:35 -- common/autotest_common.sh@10 -- # set +x 00:04:01.433 20:53:35 -- spdk/autotest.sh@126 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:02.808 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:02.808 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:02.808 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:02.808 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:02.808 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:02.808 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:02.808 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:02.808 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:02.808 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:02.808 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:02.808 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:02.808 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:02.808 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:02.808 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:02.808 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:02.808 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:03.743 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:03.743 20:53:37 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:03.743 20:53:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:03.743 20:53:37 -- common/autotest_common.sh@10 -- # set +x 00:04:03.743 20:53:37 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:03.743 20:53:37 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:03.743 20:53:37 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:03.743 20:53:37 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:03.743 20:53:37 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:03.743 20:53:37 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:03.743 20:53:37 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:03.743 20:53:37 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:03.743 20:53:37 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:03.743 20:53:37 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:03.743 20:53:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.743 20:53:37 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:03.743 20:53:37 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:03.743 20:53:37 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:03.743 20:53:37 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:04:03.743 20:53:37 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:03.743 20:53:37 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:03.743 20:53:37 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:03.743 20:53:37 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:03.743 20:53:37 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:03.743 20:53:37 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:03.743 20:53:37 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:04:03.743 20:53:37 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:04:03.743 20:53:37 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2841463 00:04:03.743 20:53:37 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.743 20:53:37 -- common/autotest_common.sh@1585 -- # waitforlisten 2841463 00:04:03.743 20:53:37 
-- common/autotest_common.sh@835 -- # '[' -z 2841463 ']' 00:04:03.743 20:53:37 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.743 20:53:37 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.743 20:53:37 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.743 20:53:37 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.743 20:53:37 -- common/autotest_common.sh@10 -- # set +x 00:04:04.002 [2024-11-19 20:53:37.587175] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:04.002 [2024-11-19 20:53:37.587325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2841463 ] 00:04:04.002 [2024-11-19 20:53:37.724653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.260 [2024-11-19 20:53:37.863026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.195 20:53:38 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:05.195 20:53:38 -- common/autotest_common.sh@868 -- # return 0 00:04:05.195 20:53:38 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:05.195 20:53:38 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:05.195 20:53:38 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:08.500 nvme0n1 00:04:08.500 20:53:41 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:08.500 [2024-11-19 20:53:42.206574] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:08.500 [2024-11-19 20:53:42.206637] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:08.500 request: 00:04:08.500 { 00:04:08.500 "nvme_ctrlr_name": "nvme0", 00:04:08.500 "password": "test", 00:04:08.500 "method": "bdev_nvme_opal_revert", 00:04:08.500 "req_id": 1 00:04:08.500 } 00:04:08.500 Got JSON-RPC error response 00:04:08.500 response: 00:04:08.500 { 00:04:08.500 "code": -32603, 00:04:08.500 "message": "Internal error" 00:04:08.500 } 00:04:08.500 20:53:42 -- common/autotest_common.sh@1591 -- # true 00:04:08.500 20:53:42 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:08.500 20:53:42 -- common/autotest_common.sh@1595 -- # killprocess 2841463 00:04:08.500 20:53:42 -- common/autotest_common.sh@954 -- # '[' -z 2841463 ']' 00:04:08.500 20:53:42 -- common/autotest_common.sh@958 -- # kill -0 2841463 00:04:08.500 20:53:42 -- common/autotest_common.sh@959 -- # uname 00:04:08.500 20:53:42 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:08.500 20:53:42 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2841463 00:04:08.500 20:53:42 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:08.500 20:53:42 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:08.500 20:53:42 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2841463' 00:04:08.500 killing process with pid 2841463 00:04:08.500 20:53:42 -- common/autotest_common.sh@973 -- # kill 
2841463 00:04:08.500 20:53:42 -- common/autotest_common.sh@978 -- # wait 2841463 00:04:12.708 20:53:45 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:12.708 20:53:45 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:12.708 20:53:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:12.708 20:53:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:12.708 20:53:45 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:12.708 20:53:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:12.708 20:53:45 -- common/autotest_common.sh@10 -- # set +x 00:04:12.708 20:53:45 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:12.708 20:53:45 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:12.708 20:53:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.708 20:53:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.708 20:53:45 -- common/autotest_common.sh@10 -- # set +x 00:04:12.708 ************************************ 00:04:12.708 START TEST env 00:04:12.708 ************************************ 00:04:12.708 20:53:45 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:12.708 * Looking for test storage... 00:04:12.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:12.708 20:53:45 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:12.708 20:53:45 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:12.708 20:53:46 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:12.708 20:53:46 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:12.708 20:53:46 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.708 20:53:46 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.708 20:53:46 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.708 20:53:46 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.708 20:53:46 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.708 20:53:46 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.708 20:53:46 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.708 20:53:46 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.708 20:53:46 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.708 20:53:46 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.708 20:53:46 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.708 20:53:46 env -- scripts/common.sh@344 -- # case "$op" in 00:04:12.708 20:53:46 env -- scripts/common.sh@345 -- # : 1 00:04:12.708 20:53:46 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.708 20:53:46 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:12.708 20:53:46 env -- scripts/common.sh@365 -- # decimal 1 00:04:12.708 20:53:46 env -- scripts/common.sh@353 -- # local d=1 00:04:12.708 20:53:46 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.708 20:53:46 env -- scripts/common.sh@355 -- # echo 1 00:04:12.708 20:53:46 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.708 20:53:46 env -- scripts/common.sh@366 -- # decimal 2 00:04:12.708 20:53:46 env -- scripts/common.sh@353 -- # local d=2 00:04:12.708 20:53:46 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.708 20:53:46 env -- scripts/common.sh@355 -- # echo 2 00:04:12.708 20:53:46 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.708 20:53:46 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.708 20:53:46 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.708 20:53:46 env -- scripts/common.sh@368 -- # return 0 00:04:12.708 20:53:46 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.708 20:53:46 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.708 --rc genhtml_branch_coverage=1 00:04:12.708 --rc genhtml_function_coverage=1 00:04:12.708 --rc genhtml_legend=1 00:04:12.708 --rc geninfo_all_blocks=1 00:04:12.708 --rc geninfo_unexecuted_blocks=1 00:04:12.708 00:04:12.708 ' 00:04:12.708 20:53:46 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.708 --rc genhtml_branch_coverage=1 00:04:12.708 --rc genhtml_function_coverage=1 00:04:12.708 --rc genhtml_legend=1 00:04:12.708 --rc geninfo_all_blocks=1 00:04:12.708 --rc geninfo_unexecuted_blocks=1 00:04:12.708 00:04:12.708 ' 00:04:12.708 20:53:46 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.708 --rc genhtml_branch_coverage=1 00:04:12.708 --rc genhtml_function_coverage=1 00:04:12.708 --rc genhtml_legend=1 00:04:12.708 --rc geninfo_all_blocks=1 00:04:12.708 --rc geninfo_unexecuted_blocks=1 00:04:12.708 00:04:12.708 ' 00:04:12.708 20:53:46 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:12.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.708 --rc genhtml_branch_coverage=1 00:04:12.708 --rc genhtml_function_coverage=1 00:04:12.708 --rc genhtml_legend=1 00:04:12.708 --rc geninfo_all_blocks=1 00:04:12.708 --rc geninfo_unexecuted_blocks=1 00:04:12.708 00:04:12.708 ' 00:04:12.708 20:53:46 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:12.709 20:53:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.709 20:53:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.709 20:53:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.709 ************************************ 00:04:12.709 START TEST env_memory 00:04:12.709 ************************************ 00:04:12.709 20:53:46 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:12.709 00:04:12.709 00:04:12.709 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.709 http://cunit.sourceforge.net/ 00:04:12.709 00:04:12.709 00:04:12.709 Suite: memory 00:04:12.709 Test: alloc and free memory map ...[2024-11-19 20:53:46.161110] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:12.709 passed 00:04:12.709 Test: mem map translation ...[2024-11-19 20:53:46.201199] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:12.709 [2024-11-19 20:53:46.201240] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:12.709 [2024-11-19 20:53:46.201325] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:12.709 [2024-11-19 20:53:46.201356] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:12.709 passed 00:04:12.709 Test: mem map registration ...[2024-11-19 20:53:46.269749] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:12.709 [2024-11-19 20:53:46.269798] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:12.709 passed 00:04:12.709 Test: mem map adjacent registrations ...passed 00:04:12.709 00:04:12.709 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.709 suites 1 1 n/a 0 0 00:04:12.709 tests 4 4 4 0 0 00:04:12.709 asserts 152 152 152 0 n/a 00:04:12.709 00:04:12.709 Elapsed time = 0.244 seconds 00:04:12.709 00:04:12.709 real 0m0.264s 00:04:12.709 user 0m0.247s 00:04:12.709 sys 0m0.016s 00:04:12.709 20:53:46 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.709 20:53:46 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:12.709 ************************************ 00:04:12.709 END TEST env_memory 00:04:12.709 ************************************ 00:04:12.709 20:53:46 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:12.709 20:53:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.709 20:53:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.709 20:53:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.709 ************************************ 00:04:12.709 START TEST env_vtophys 00:04:12.709 ************************************ 00:04:12.709 20:53:46 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:12.709 EAL: lib.eal log level changed from notice to debug 00:04:12.709 EAL: Detected lcore 0 as core 0 on socket 0 00:04:12.709 EAL: Detected lcore 1 as core 1 on socket 0 00:04:12.709 EAL: Detected lcore 2 as core 2 on socket 0 00:04:12.709 EAL: Detected lcore 3 as core 3 on socket 0 00:04:12.709 EAL: Detected lcore 4 as core 4 on socket 0 00:04:12.709 EAL: Detected lcore 5 as core 5 on socket 0 00:04:12.709 EAL: Detected lcore 6 as core 8 on socket 0 00:04:12.709 EAL: Detected lcore 7 as core 9 on socket 0 00:04:12.709 EAL: Detected lcore 8 as core 10 on socket 0 00:04:12.709 EAL: Detected lcore 9 as core 11 on socket 0 00:04:12.709 EAL: Detected lcore 10 
as core 12 on socket 0 00:04:12.709 EAL: Detected lcore 11 as core 13 on socket 0 00:04:12.709 EAL: Detected lcore 12 as core 0 on socket 1 00:04:12.709 EAL: Detected lcore 13 as core 1 on socket 1 00:04:12.709 EAL: Detected lcore 14 as core 2 on socket 1 00:04:12.709 EAL: Detected lcore 15 as core 3 on socket 1 00:04:12.709 EAL: Detected lcore 16 as core 4 on socket 1 00:04:12.709 EAL: Detected lcore 17 as core 5 on socket 1 00:04:12.709 EAL: Detected lcore 18 as core 8 on socket 1 00:04:12.709 EAL: Detected lcore 19 as core 9 on socket 1 00:04:12.709 EAL: Detected lcore 20 as core 10 on socket 1 00:04:12.709 EAL: Detected lcore 21 as core 11 on socket 1 00:04:12.709 EAL: Detected lcore 22 as core 12 on socket 1 00:04:12.709 EAL: Detected lcore 23 as core 13 on socket 1 00:04:12.709 EAL: Detected lcore 24 as core 0 on socket 0 00:04:12.709 EAL: Detected lcore 25 as core 1 on socket 0 00:04:12.709 EAL: Detected lcore 26 as core 2 on socket 0 00:04:12.709 EAL: Detected lcore 27 as core 3 on socket 0 00:04:12.709 EAL: Detected lcore 28 as core 4 on socket 0 00:04:12.709 EAL: Detected lcore 29 as core 5 on socket 0 00:04:12.709 EAL: Detected lcore 30 as core 8 on socket 0 00:04:12.709 EAL: Detected lcore 31 as core 9 on socket 0 00:04:12.709 EAL: Detected lcore 32 as core 10 on socket 0 00:04:12.709 EAL: Detected lcore 33 as core 11 on socket 0 00:04:12.709 EAL: Detected lcore 34 as core 12 on socket 0 00:04:12.709 EAL: Detected lcore 35 as core 13 on socket 0 00:04:12.709 EAL: Detected lcore 36 as core 0 on socket 1 00:04:12.709 EAL: Detected lcore 37 as core 1 on socket 1 00:04:12.709 EAL: Detected lcore 38 as core 2 on socket 1 00:04:12.709 EAL: Detected lcore 39 as core 3 on socket 1 00:04:12.709 EAL: Detected lcore 40 as core 4 on socket 1 00:04:12.709 EAL: Detected lcore 41 as core 5 on socket 1 00:04:12.709 EAL: Detected lcore 42 as core 8 on socket 1 00:04:12.709 EAL: Detected lcore 43 as core 9 on socket 1 00:04:12.709 EAL: Detected lcore 44 as core 10 on socket 1 00:04:12.709 EAL: Detected lcore 45 as core 11 on socket 1 00:04:12.709 EAL: Detected lcore 46 as core 12 on socket 1 00:04:12.709 EAL: Detected lcore 47 as core 13 on socket 1 00:04:12.709 EAL: Maximum logical cores by configuration: 128 00:04:12.709 EAL: Detected CPU lcores: 48 00:04:12.709 EAL: Detected NUMA nodes: 2 00:04:12.709 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:12.709 EAL: Detected shared linkage of DPDK 00:04:12.709 EAL: No shared files mode enabled, IPC will be disabled 00:04:12.968 EAL: Bus pci wants IOVA as 'DC' 00:04:12.968 EAL: Buses did not request a specific IOVA mode. 00:04:12.968 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:12.968 EAL: Selected IOVA mode 'VA' 00:04:12.968 EAL: Probing VFIO support... 00:04:12.968 EAL: IOMMU type 1 (Type 1) is supported 00:04:12.968 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:12.968 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:12.968 EAL: VFIO support initialized 00:04:12.968 EAL: Ask a virtual area of 0x2e000 bytes 00:04:12.968 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:12.968 EAL: Setting up physically contiguous memory... 
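The lcore map above decodes to a two-socket box: each socket exposes physical core IDs 0-5 and 8-13 (12 cores per socket), and lcores 24-47 repeat the same core/socket pairs as lcores 0-23, i.e. they are the second hardware threads of the same cores, giving 2 sockets x 12 cores x 2 threads = 48 lcores across 2 NUMA nodes, which matches the "Detected CPU lcores: 48" and "Detected NUMA nodes: 2" summary lines. With an IOMMU present and VFIO initialized, EAL selects IOVA-as-VA addressing ("Selected IOVA mode 'VA'") rather than physical addresses.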
00:04:12.968 EAL: Setting maximum number of open files to 524288 00:04:12.968 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:12.968 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:12.968 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:12.968 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.968 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:12.968 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.968 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.968 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:12.968 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:12.968 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.968 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:12.968 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.968 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.968 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:12.968 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:12.968 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.968 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:12.968 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.968 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.968 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:12.968 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:12.968 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.968 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:12.968 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.968 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.968 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:12.968 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:12.968 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:12.968 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.968 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:12.968 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:12.968 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.968 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:12.968 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:12.968 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.968 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:12.968 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:12.968 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.968 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:12.968 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:12.968 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.968 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:12.968 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:12.968 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.968 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:12.968 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:12.968 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.968 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:12.968 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:12.968 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.968 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:12.968 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:12.968 EAL: Hugepages will be freed exactly as allocated. 00:04:12.968 EAL: No shared files mode enabled, IPC is disabled 00:04:12.968 EAL: No shared files mode enabled, IPC is disabled 00:04:12.968 EAL: TSC frequency is ~2700000 KHz 00:04:12.968 EAL: Main lcore 0 is ready (tid=7fef37026a40;cpuset=[0]) 00:04:12.968 EAL: Trying to obtain current memory policy. 00:04:12.968 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.968 EAL: Restoring previous memory policy: 0 00:04:12.968 EAL: request: mp_malloc_sync 00:04:12.968 EAL: No shared files mode enabled, IPC is disabled 00:04:12.968 EAL: Heap on socket 0 was expanded by 2MB 00:04:12.968 EAL: No shared files mode enabled, IPC is disabled 00:04:12.968 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:12.968 EAL: Mem event callback 'spdk:(nil)' registered 00:04:12.968 00:04:12.968 00:04:12.968 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.968 http://cunit.sourceforge.net/ 00:04:12.968 00:04:12.968 00:04:12.968 Suite: components_suite 00:04:13.226 Test: vtophys_malloc_test ...passed 00:04:13.226 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:13.226 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.226 EAL: Restoring previous memory policy: 4 00:04:13.226 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.226 EAL: request: mp_malloc_sync 00:04:13.227 EAL: No shared files mode enabled, IPC is disabled 00:04:13.227 EAL: Heap on socket 0 was expanded by 4MB 00:04:13.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.484 EAL: request: mp_malloc_sync 00:04:13.484 EAL: No shared files mode enabled, IPC is disabled 00:04:13.484 EAL: Heap on socket 0 was shrunk by 4MB 00:04:13.484 EAL: Trying to obtain current memory policy. 00:04:13.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.484 EAL: Restoring previous memory policy: 4 00:04:13.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.484 EAL: request: mp_malloc_sync 00:04:13.484 EAL: No shared files mode enabled, IPC is disabled 00:04:13.484 EAL: Heap on socket 0 was expanded by 6MB 00:04:13.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.484 EAL: request: mp_malloc_sync 00:04:13.485 EAL: No shared files mode enabled, IPC is disabled 00:04:13.485 EAL: Heap on socket 0 was shrunk by 6MB 00:04:13.485 EAL: Trying to obtain current memory policy. 00:04:13.485 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.485 EAL: Restoring previous memory policy: 4 00:04:13.485 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.485 EAL: request: mp_malloc_sync 00:04:13.485 EAL: No shared files mode enabled, IPC is disabled 00:04:13.485 EAL: Heap on socket 0 was expanded by 10MB 00:04:13.485 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.485 EAL: request: mp_malloc_sync 00:04:13.485 EAL: No shared files mode enabled, IPC is disabled 00:04:13.485 EAL: Heap on socket 0 was shrunk by 10MB 00:04:13.485 EAL: Trying to obtain current memory policy. 
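The address-space reservations above follow directly from the memseg geometry: each list holds n_segs:8192 segments of 2 MiB hugepages, and 8192 x 2 MiB = 16 GiB = 0x400000000 bytes, which is exactly the size of every "VA reserved for memseg list" region. With four lists per socket and two sockets, roughly 128 GiB of virtual address space is reserved up front even though no hugepages are faulted in yet ("Hugepages will be freed exactly as allocated").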
00:04:13.485 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.485 EAL: Restoring previous memory policy: 4 00:04:13.485 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.485 EAL: request: mp_malloc_sync 00:04:13.485 EAL: No shared files mode enabled, IPC is disabled 00:04:13.485 EAL: Heap on socket 0 was expanded by 18MB 00:04:13.485 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.485 EAL: request: mp_malloc_sync 00:04:13.485 EAL: No shared files mode enabled, IPC is disabled 00:04:13.485 EAL: Heap on socket 0 was shrunk by 18MB 00:04:13.485 EAL: Trying to obtain current memory policy. 00:04:13.485 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.485 EAL: Restoring previous memory policy: 4 00:04:13.485 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.485 EAL: request: mp_malloc_sync 00:04:13.485 EAL: No shared files mode enabled, IPC is disabled 00:04:13.485 EAL: Heap on socket 0 was expanded by 34MB 00:04:13.485 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.485 EAL: request: mp_malloc_sync 00:04:13.485 EAL: No shared files mode enabled, IPC is disabled 00:04:13.485 EAL: Heap on socket 0 was shrunk by 34MB 00:04:13.485 EAL: Trying to obtain current memory policy. 00:04:13.485 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.485 EAL: Restoring previous memory policy: 4 00:04:13.485 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.485 EAL: request: mp_malloc_sync 00:04:13.485 EAL: No shared files mode enabled, IPC is disabled 00:04:13.485 EAL: Heap on socket 0 was expanded by 66MB 00:04:13.742 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.742 EAL: request: mp_malloc_sync 00:04:13.742 EAL: No shared files mode enabled, IPC is disabled 00:04:13.742 EAL: Heap on socket 0 was shrunk by 66MB 00:04:13.742 EAL: Trying to obtain current memory policy. 00:04:13.742 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.742 EAL: Restoring previous memory policy: 4 00:04:13.742 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.742 EAL: request: mp_malloc_sync 00:04:13.742 EAL: No shared files mode enabled, IPC is disabled 00:04:13.742 EAL: Heap on socket 0 was expanded by 130MB 00:04:13.998 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.256 EAL: request: mp_malloc_sync 00:04:14.256 EAL: No shared files mode enabled, IPC is disabled 00:04:14.256 EAL: Heap on socket 0 was shrunk by 130MB 00:04:14.256 EAL: Trying to obtain current memory policy. 00:04:14.256 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.514 EAL: Restoring previous memory policy: 4 00:04:14.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.514 EAL: request: mp_malloc_sync 00:04:14.514 EAL: No shared files mode enabled, IPC is disabled 00:04:14.514 EAL: Heap on socket 0 was expanded by 258MB 00:04:14.772 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.030 EAL: request: mp_malloc_sync 00:04:15.030 EAL: No shared files mode enabled, IPC is disabled 00:04:15.030 EAL: Heap on socket 0 was shrunk by 258MB 00:04:15.289 EAL: Trying to obtain current memory policy. 
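The heap sizes printed by vtophys_spdk_malloc_test are worth a second look: the expansions run 4, 6, 10, 18, 34, 66, 130, 258 MB here and continue to 514 and 1026 MB below, i.e. 2^n + 2 MB for n = 1..10, consistent with the test doubling its allocation size each round on top of a fixed 2 MB of allocator overhead (the interpretation is inferred; the sizes are straight from the log). Each "expanded by" is paired with a matching "shrunk by" once the buffer is freed, so both directions of the 'spdk:(nil)' mem event callback get exercised.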
00:04:15.289 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.547 EAL: Restoring previous memory policy: 4 00:04:15.547 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.547 EAL: request: mp_malloc_sync 00:04:15.547 EAL: No shared files mode enabled, IPC is disabled 00:04:15.547 EAL: Heap on socket 0 was expanded by 514MB 00:04:16.482 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.482 EAL: request: mp_malloc_sync 00:04:16.482 EAL: No shared files mode enabled, IPC is disabled 00:04:16.482 EAL: Heap on socket 0 was shrunk by 514MB 00:04:17.416 EAL: Trying to obtain current memory policy. 00:04:17.416 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.675 EAL: Restoring previous memory policy: 4 00:04:17.675 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.675 EAL: request: mp_malloc_sync 00:04:17.675 EAL: No shared files mode enabled, IPC is disabled 00:04:17.675 EAL: Heap on socket 0 was expanded by 1026MB 00:04:19.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.576 EAL: request: mp_malloc_sync 00:04:19.576 EAL: No shared files mode enabled, IPC is disabled 00:04:19.576 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:21.478 passed 00:04:21.478 00:04:21.478 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.478 suites 1 1 n/a 0 0 00:04:21.478 tests 2 2 2 0 0 00:04:21.478 asserts 497 497 497 0 n/a 00:04:21.478 00:04:21.478 Elapsed time = 8.254 seconds 00:04:21.478 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.478 EAL: request: mp_malloc_sync 00:04:21.478 EAL: No shared files mode enabled, IPC is disabled 00:04:21.478 EAL: Heap on socket 0 was shrunk by 2MB 00:04:21.478 EAL: No shared files mode enabled, IPC is disabled 00:04:21.478 EAL: No shared files mode enabled, IPC is disabled 00:04:21.478 EAL: No shared files mode enabled, IPC is disabled 00:04:21.478 00:04:21.478 real 0m8.537s 00:04:21.478 user 0m7.427s 00:04:21.478 sys 0m1.052s 00:04:21.478 20:53:54 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.478 20:53:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:21.478 ************************************ 00:04:21.478 END TEST env_vtophys 00:04:21.478 ************************************ 00:04:21.478 20:53:54 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:21.478 20:53:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.478 20:53:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.478 20:53:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.478 ************************************ 00:04:21.478 START TEST env_pci 00:04:21.478 ************************************ 00:04:21.478 20:53:55 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:21.478 00:04:21.478 00:04:21.478 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.478 http://cunit.sourceforge.net/ 00:04:21.478 00:04:21.478 00:04:21.478 Suite: pci 00:04:21.478 Test: pci_hook ...[2024-11-19 20:53:55.044339] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2843559 has claimed it 00:04:21.478 EAL: Cannot find device (10000:00:01.0) 00:04:21.478 EAL: Failed to attach device on primary process 00:04:21.478 passed 00:04:21.478 00:04:21.478 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:21.478 suites 1 1 n/a 0 0 00:04:21.478 tests 1 1 1 0 0 00:04:21.478 asserts 25 25 25 0 n/a 00:04:21.478 00:04:21.478 Elapsed time = 0.045 seconds 00:04:21.478 00:04:21.478 real 0m0.099s 00:04:21.478 user 0m0.038s 00:04:21.478 sys 0m0.061s 00:04:21.478 20:53:55 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.478 20:53:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:21.478 ************************************ 00:04:21.478 END TEST env_pci 00:04:21.478 ************************************ 00:04:21.478 20:53:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:21.478 20:53:55 env -- env/env.sh@15 -- # uname 00:04:21.478 20:53:55 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:21.478 20:53:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:21.478 20:53:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:21.478 20:53:55 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:21.479 20:53:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.479 20:53:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.479 ************************************ 00:04:21.479 START TEST env_dpdk_post_init 00:04:21.479 ************************************ 00:04:21.479 20:53:55 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:21.479 EAL: Detected CPU lcores: 48 00:04:21.479 EAL: Detected NUMA nodes: 2 00:04:21.479 EAL: Detected shared linkage of DPDK 00:04:21.479 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:21.738 EAL: Selected IOVA mode 'VA' 00:04:21.738 EAL: VFIO support initialized 00:04:21.738 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:21.738 EAL: Using IOMMU type 1 (Type 1) 00:04:21.738 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:21.738 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:21.738 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:21.738 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:21.738 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:21.738 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:21.738 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:21.738 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:21.738 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:21.996 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:21.996 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:21.996 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:21.996 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:21.996 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:21.996 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:21.996 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:22.930 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 
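env_dpdk_post_init only sees these devices because scripts/setup.sh bound them to vfio-pci earlier in the log (the "ioatdma -> vfio-pci" and "nvme -> vfio-pci" transitions); the probes above cover the sixteen I/OAT channels plus the single NVMe drive at 0000:88:00.0 from the setup.sh status table. A quick sysfs check of which driver owns a device, shown as an illustrative one-liner rather than anything autotest itself runs:

# which kernel driver currently owns the NVMe device under test (illustrative check)
bdf=0000:88:00.0
basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)"   # vfio-pci while DPDK owns it, nvme after setup.sh reset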
00:04:26.210 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:26.210 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:26.210 Starting DPDK initialization... 00:04:26.210 Starting SPDK post initialization... 00:04:26.210 SPDK NVMe probe 00:04:26.210 Attaching to 0000:88:00.0 00:04:26.210 Attached to 0000:88:00.0 00:04:26.210 Cleaning up... 00:04:26.210 00:04:26.210 real 0m4.617s 00:04:26.210 user 0m3.176s 00:04:26.210 sys 0m0.493s 00:04:26.210 20:53:59 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.210 20:53:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:26.210 ************************************ 00:04:26.210 END TEST env_dpdk_post_init 00:04:26.210 ************************************ 00:04:26.210 20:53:59 env -- env/env.sh@26 -- # uname 00:04:26.210 20:53:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:26.210 20:53:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:26.210 20:53:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.210 20:53:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.210 20:53:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.210 ************************************ 00:04:26.210 START TEST env_mem_callbacks 00:04:26.210 ************************************ 00:04:26.210 20:53:59 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:26.210 EAL: Detected CPU lcores: 48 00:04:26.210 EAL: Detected NUMA nodes: 2 00:04:26.210 EAL: Detected shared linkage of DPDK 00:04:26.210 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:26.210 EAL: Selected IOVA mode 'VA' 00:04:26.210 EAL: VFIO support initialized 00:04:26.210 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:26.210 00:04:26.210 00:04:26.210 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.210 http://cunit.sourceforge.net/ 00:04:26.210 00:04:26.210 00:04:26.210 Suite: memory 00:04:26.210 Test: test ... 
00:04:26.210 register 0x200000200000 2097152 00:04:26.210 malloc 3145728 00:04:26.210 register 0x200000400000 4194304 00:04:26.210 buf 0x2000004fffc0 len 3145728 PASSED 00:04:26.210 malloc 64 00:04:26.210 buf 0x2000004ffec0 len 64 PASSED 00:04:26.210 malloc 4194304 00:04:26.210 register 0x200000800000 6291456 00:04:26.210 buf 0x2000009fffc0 len 4194304 PASSED 00:04:26.210 free 0x2000004fffc0 3145728 00:04:26.210 free 0x2000004ffec0 64 00:04:26.210 unregister 0x200000400000 4194304 PASSED 00:04:26.210 free 0x2000009fffc0 4194304 00:04:26.210 unregister 0x200000800000 6291456 PASSED 00:04:26.210 malloc 8388608 00:04:26.210 register 0x200000400000 10485760 00:04:26.210 buf 0x2000005fffc0 len 8388608 PASSED 00:04:26.210 free 0x2000005fffc0 8388608 00:04:26.210 unregister 0x200000400000 10485760 PASSED 00:04:26.210 passed 00:04:26.210 00:04:26.210 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.210 suites 1 1 n/a 0 0 00:04:26.210 tests 1 1 1 0 0 00:04:26.210 asserts 15 15 15 0 n/a 00:04:26.210 00:04:26.210 Elapsed time = 0.060 seconds 00:04:26.469 00:04:26.469 real 0m0.185s 00:04:26.469 user 0m0.102s 00:04:26.469 sys 0m0.081s 00:04:26.469 20:54:00 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.469 20:54:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:26.469 ************************************ 00:04:26.469 END TEST env_mem_callbacks 00:04:26.469 ************************************ 00:04:26.469 00:04:26.469 real 0m14.089s 00:04:26.469 user 0m11.193s 00:04:26.469 sys 0m1.911s 00:04:26.469 20:54:00 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.469 20:54:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.469 ************************************ 00:04:26.469 END TEST env 00:04:26.469 ************************************ 00:04:26.469 20:54:00 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:26.469 20:54:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.469 20:54:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.469 20:54:00 -- common/autotest_common.sh@10 -- # set +x 00:04:26.469 ************************************ 00:04:26.469 START TEST rpc 00:04:26.469 ************************************ 00:04:26.469 20:54:00 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:26.469 * Looking for test storage... 
00:04:26.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:26.469 20:54:00 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:26.469 20:54:00 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:26.469 20:54:00 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:26.469 20:54:00 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:26.469 20:54:00 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.469 20:54:00 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.469 20:54:00 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.469 20:54:00 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.469 20:54:00 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.469 20:54:00 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.469 20:54:00 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.469 20:54:00 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.469 20:54:00 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.469 20:54:00 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.469 20:54:00 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.469 20:54:00 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:26.469 20:54:00 rpc -- scripts/common.sh@345 -- # : 1 00:04:26.469 20:54:00 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.469 20:54:00 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:26.469 20:54:00 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:26.469 20:54:00 rpc -- scripts/common.sh@353 -- # local d=1 00:04:26.469 20:54:00 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.469 20:54:00 rpc -- scripts/common.sh@355 -- # echo 1 00:04:26.469 20:54:00 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.469 20:54:00 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:26.469 20:54:00 rpc -- scripts/common.sh@353 -- # local d=2 00:04:26.469 20:54:00 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.469 20:54:00 rpc -- scripts/common.sh@355 -- # echo 2 00:04:26.469 20:54:00 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.469 20:54:00 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.470 20:54:00 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.470 20:54:00 rpc -- scripts/common.sh@368 -- # return 0 00:04:26.470 20:54:00 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.470 20:54:00 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:26.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.470 --rc genhtml_branch_coverage=1 00:04:26.470 --rc genhtml_function_coverage=1 00:04:26.470 --rc genhtml_legend=1 00:04:26.470 --rc geninfo_all_blocks=1 00:04:26.470 --rc geninfo_unexecuted_blocks=1 00:04:26.470 00:04:26.470 ' 00:04:26.470 20:54:00 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:26.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.470 --rc genhtml_branch_coverage=1 00:04:26.470 --rc genhtml_function_coverage=1 00:04:26.470 --rc genhtml_legend=1 00:04:26.470 --rc geninfo_all_blocks=1 00:04:26.470 --rc geninfo_unexecuted_blocks=1 00:04:26.470 00:04:26.470 ' 00:04:26.470 20:54:00 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:26.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.470 --rc genhtml_branch_coverage=1 00:04:26.470 --rc genhtml_function_coverage=1 
00:04:26.470 --rc genhtml_legend=1 00:04:26.470 --rc geninfo_all_blocks=1 00:04:26.470 --rc geninfo_unexecuted_blocks=1 00:04:26.470 00:04:26.470 ' 00:04:26.470 20:54:00 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:26.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.470 --rc genhtml_branch_coverage=1 00:04:26.470 --rc genhtml_function_coverage=1 00:04:26.470 --rc genhtml_legend=1 00:04:26.470 --rc geninfo_all_blocks=1 00:04:26.470 --rc geninfo_unexecuted_blocks=1 00:04:26.470 00:04:26.470 ' 00:04:26.470 20:54:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2844386 00:04:26.470 20:54:00 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:26.470 20:54:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.470 20:54:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2844386 00:04:26.470 20:54:00 rpc -- common/autotest_common.sh@835 -- # '[' -z 2844386 ']' 00:04:26.470 20:54:00 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.470 20:54:00 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.470 20:54:00 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.470 20:54:00 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.470 20:54:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.728 [2024-11-19 20:54:00.333163] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:26.728 [2024-11-19 20:54:00.333323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2844386 ] 00:04:26.728 [2024-11-19 20:54:00.483962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.986 [2024-11-19 20:54:00.621286] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:26.986 [2024-11-19 20:54:00.621369] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2844386' to capture a snapshot of events at runtime. 00:04:26.986 [2024-11-19 20:54:00.621399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:26.986 [2024-11-19 20:54:00.621421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:26.986 [2024-11-19 20:54:00.621453] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2844386 for offline analysis/debug. 
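The app_setup_trace notices above can be followed up by hand while a target like the one rpc.sh starts is running. The following is only a minimal sketch, assuming an SPDK build tree in the current directory and a freshly started target standing in for the PID 2844386 printed in the log; the waitforlisten bookkeeping the test harness does is reduced to a comment.

# Start a target with the bdev tracepoint group enabled, as rpc/rpc.sh@64 does
./build/bin/spdk_tgt -e bdev &
TGT_PID=$!
# (in practice, wait for /var/tmp/spdk.sock to appear, as waitforlisten does)

# Snapshot trace events at runtime, following the NOTICE from app.c;
# the backing file is /dev/shm/spdk_tgt_trace.pid$TGT_PID
./build/bin/spdk_trace -s spdk_tgt -p "$TGT_PID"

# The active tpoint masks can also be queried over RPC, which is what
# rpc_trace_cmd_test does later via trace_get_info
./scripts/rpc.py trace_get_info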
00:04:26.986 [2024-11-19 20:54:00.623020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.921 20:54:01 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.921 20:54:01 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:27.921 20:54:01 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:27.921 20:54:01 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:27.921 20:54:01 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:27.921 20:54:01 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:27.921 20:54:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.921 20:54:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.921 20:54:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.921 ************************************ 00:04:27.921 START TEST rpc_integrity 00:04:27.921 ************************************ 00:04:27.921 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:27.921 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:27.921 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.921 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.921 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.921 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:27.921 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:27.921 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:27.921 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:27.921 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.921 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.921 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.921 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:27.921 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:27.921 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.921 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.921 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.921 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:27.921 { 00:04:27.921 "name": "Malloc0", 00:04:27.921 "aliases": [ 00:04:27.921 "3379e472-ed76-4f59-91ad-02715afba49b" 00:04:27.921 ], 00:04:27.921 "product_name": "Malloc disk", 00:04:27.921 "block_size": 512, 00:04:27.921 "num_blocks": 16384, 00:04:27.921 "uuid": "3379e472-ed76-4f59-91ad-02715afba49b", 00:04:27.921 "assigned_rate_limits": { 00:04:27.921 "rw_ios_per_sec": 0, 00:04:27.921 "rw_mbytes_per_sec": 0, 00:04:27.921 "r_mbytes_per_sec": 0, 00:04:27.921 "w_mbytes_per_sec": 0 00:04:27.921 }, 
00:04:27.921 "claimed": false, 00:04:27.921 "zoned": false, 00:04:27.921 "supported_io_types": { 00:04:27.921 "read": true, 00:04:27.921 "write": true, 00:04:27.921 "unmap": true, 00:04:27.921 "flush": true, 00:04:27.921 "reset": true, 00:04:27.921 "nvme_admin": false, 00:04:27.921 "nvme_io": false, 00:04:27.921 "nvme_io_md": false, 00:04:27.921 "write_zeroes": true, 00:04:27.921 "zcopy": true, 00:04:27.921 "get_zone_info": false, 00:04:27.921 "zone_management": false, 00:04:27.921 "zone_append": false, 00:04:27.921 "compare": false, 00:04:27.921 "compare_and_write": false, 00:04:27.921 "abort": true, 00:04:27.921 "seek_hole": false, 00:04:27.921 "seek_data": false, 00:04:27.921 "copy": true, 00:04:27.921 "nvme_iov_md": false 00:04:27.921 }, 00:04:27.921 "memory_domains": [ 00:04:27.921 { 00:04:27.921 "dma_device_id": "system", 00:04:27.921 "dma_device_type": 1 00:04:27.921 }, 00:04:27.921 { 00:04:27.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.921 "dma_device_type": 2 00:04:27.921 } 00:04:27.921 ], 00:04:27.921 "driver_specific": {} 00:04:27.921 } 00:04:27.921 ]' 00:04:27.921 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:27.921 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:27.921 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:27.921 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.921 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.921 [2024-11-19 20:54:01.712162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:27.921 [2024-11-19 20:54:01.712245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:27.921 [2024-11-19 20:54:01.712297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:27.921 [2024-11-19 20:54:01.712322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:28.180 [2024-11-19 20:54:01.715214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:28.180 [2024-11-19 20:54:01.715252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:28.180 Passthru0 00:04:28.180 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.180 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:28.180 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.180 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.180 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.180 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:28.180 { 00:04:28.180 "name": "Malloc0", 00:04:28.180 "aliases": [ 00:04:28.180 "3379e472-ed76-4f59-91ad-02715afba49b" 00:04:28.180 ], 00:04:28.180 "product_name": "Malloc disk", 00:04:28.180 "block_size": 512, 00:04:28.180 "num_blocks": 16384, 00:04:28.180 "uuid": "3379e472-ed76-4f59-91ad-02715afba49b", 00:04:28.180 "assigned_rate_limits": { 00:04:28.180 "rw_ios_per_sec": 0, 00:04:28.180 "rw_mbytes_per_sec": 0, 00:04:28.180 "r_mbytes_per_sec": 0, 00:04:28.180 "w_mbytes_per_sec": 0 00:04:28.180 }, 00:04:28.180 "claimed": true, 00:04:28.180 "claim_type": "exclusive_write", 00:04:28.180 "zoned": false, 00:04:28.180 "supported_io_types": { 00:04:28.180 "read": true, 00:04:28.180 "write": true, 00:04:28.180 "unmap": true, 00:04:28.180 
"flush": true, 00:04:28.180 "reset": true, 00:04:28.180 "nvme_admin": false, 00:04:28.180 "nvme_io": false, 00:04:28.180 "nvme_io_md": false, 00:04:28.180 "write_zeroes": true, 00:04:28.180 "zcopy": true, 00:04:28.180 "get_zone_info": false, 00:04:28.180 "zone_management": false, 00:04:28.180 "zone_append": false, 00:04:28.180 "compare": false, 00:04:28.180 "compare_and_write": false, 00:04:28.180 "abort": true, 00:04:28.180 "seek_hole": false, 00:04:28.180 "seek_data": false, 00:04:28.180 "copy": true, 00:04:28.180 "nvme_iov_md": false 00:04:28.180 }, 00:04:28.180 "memory_domains": [ 00:04:28.180 { 00:04:28.180 "dma_device_id": "system", 00:04:28.180 "dma_device_type": 1 00:04:28.180 }, 00:04:28.180 { 00:04:28.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.180 "dma_device_type": 2 00:04:28.180 } 00:04:28.180 ], 00:04:28.180 "driver_specific": {} 00:04:28.180 }, 00:04:28.180 { 00:04:28.180 "name": "Passthru0", 00:04:28.180 "aliases": [ 00:04:28.180 "ce8a4f39-8577-51d3-b810-da69d25f7475" 00:04:28.180 ], 00:04:28.180 "product_name": "passthru", 00:04:28.180 "block_size": 512, 00:04:28.180 "num_blocks": 16384, 00:04:28.180 "uuid": "ce8a4f39-8577-51d3-b810-da69d25f7475", 00:04:28.180 "assigned_rate_limits": { 00:04:28.180 "rw_ios_per_sec": 0, 00:04:28.180 "rw_mbytes_per_sec": 0, 00:04:28.180 "r_mbytes_per_sec": 0, 00:04:28.180 "w_mbytes_per_sec": 0 00:04:28.180 }, 00:04:28.180 "claimed": false, 00:04:28.180 "zoned": false, 00:04:28.180 "supported_io_types": { 00:04:28.180 "read": true, 00:04:28.180 "write": true, 00:04:28.180 "unmap": true, 00:04:28.180 "flush": true, 00:04:28.180 "reset": true, 00:04:28.180 "nvme_admin": false, 00:04:28.180 "nvme_io": false, 00:04:28.180 "nvme_io_md": false, 00:04:28.180 "write_zeroes": true, 00:04:28.180 "zcopy": true, 00:04:28.180 "get_zone_info": false, 00:04:28.180 "zone_management": false, 00:04:28.180 "zone_append": false, 00:04:28.180 "compare": false, 00:04:28.180 "compare_and_write": false, 00:04:28.180 "abort": true, 00:04:28.180 "seek_hole": false, 00:04:28.180 "seek_data": false, 00:04:28.180 "copy": true, 00:04:28.180 "nvme_iov_md": false 00:04:28.180 }, 00:04:28.180 "memory_domains": [ 00:04:28.180 { 00:04:28.180 "dma_device_id": "system", 00:04:28.180 "dma_device_type": 1 00:04:28.180 }, 00:04:28.180 { 00:04:28.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.180 "dma_device_type": 2 00:04:28.180 } 00:04:28.180 ], 00:04:28.180 "driver_specific": { 00:04:28.180 "passthru": { 00:04:28.180 "name": "Passthru0", 00:04:28.180 "base_bdev_name": "Malloc0" 00:04:28.180 } 00:04:28.180 } 00:04:28.180 } 00:04:28.180 ]' 00:04:28.180 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:28.180 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:28.180 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:28.180 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.180 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.180 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.180 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:28.180 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.180 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.180 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.180 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@25 
-- # rpc_cmd bdev_get_bdevs 00:04:28.180 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.180 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.180 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.180 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:28.180 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:28.180 20:54:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:28.180 00:04:28.180 real 0m0.261s 00:04:28.180 user 0m0.149s 00:04:28.180 sys 0m0.022s 00:04:28.180 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.180 20:54:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.180 ************************************ 00:04:28.180 END TEST rpc_integrity 00:04:28.180 ************************************ 00:04:28.180 20:54:01 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:28.180 20:54:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.180 20:54:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.180 20:54:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.180 ************************************ 00:04:28.180 START TEST rpc_plugins 00:04:28.180 ************************************ 00:04:28.180 20:54:01 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:28.180 20:54:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:28.180 20:54:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.181 20:54:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.181 20:54:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.181 20:54:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:28.181 20:54:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:28.181 20:54:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.181 20:54:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.181 20:54:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.181 20:54:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:28.181 { 00:04:28.181 "name": "Malloc1", 00:04:28.181 "aliases": [ 00:04:28.181 "4d70668e-02a3-4cde-b882-ce56bb87be49" 00:04:28.181 ], 00:04:28.181 "product_name": "Malloc disk", 00:04:28.181 "block_size": 4096, 00:04:28.181 "num_blocks": 256, 00:04:28.181 "uuid": "4d70668e-02a3-4cde-b882-ce56bb87be49", 00:04:28.181 "assigned_rate_limits": { 00:04:28.181 "rw_ios_per_sec": 0, 00:04:28.181 "rw_mbytes_per_sec": 0, 00:04:28.181 "r_mbytes_per_sec": 0, 00:04:28.181 "w_mbytes_per_sec": 0 00:04:28.181 }, 00:04:28.181 "claimed": false, 00:04:28.181 "zoned": false, 00:04:28.181 "supported_io_types": { 00:04:28.181 "read": true, 00:04:28.181 "write": true, 00:04:28.181 "unmap": true, 00:04:28.181 "flush": true, 00:04:28.181 "reset": true, 00:04:28.181 "nvme_admin": false, 00:04:28.181 "nvme_io": false, 00:04:28.181 "nvme_io_md": false, 00:04:28.181 "write_zeroes": true, 00:04:28.181 "zcopy": true, 00:04:28.181 "get_zone_info": false, 00:04:28.181 "zone_management": false, 00:04:28.181 "zone_append": false, 00:04:28.181 "compare": false, 00:04:28.181 "compare_and_write": false, 00:04:28.181 "abort": true, 00:04:28.181 "seek_hole": false, 00:04:28.181 "seek_data": false, 00:04:28.181 "copy": true, 00:04:28.181 "nvme_iov_md": 
false 00:04:28.181 }, 00:04:28.181 "memory_domains": [ 00:04:28.181 { 00:04:28.181 "dma_device_id": "system", 00:04:28.181 "dma_device_type": 1 00:04:28.181 }, 00:04:28.181 { 00:04:28.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.181 "dma_device_type": 2 00:04:28.181 } 00:04:28.181 ], 00:04:28.181 "driver_specific": {} 00:04:28.181 } 00:04:28.181 ]' 00:04:28.181 20:54:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:28.181 20:54:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:28.181 20:54:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:28.181 20:54:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.181 20:54:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.439 20:54:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.439 20:54:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:28.439 20:54:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.439 20:54:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.439 20:54:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.439 20:54:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:28.439 20:54:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:28.439 20:54:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:28.440 00:04:28.440 real 0m0.119s 00:04:28.440 user 0m0.077s 00:04:28.440 sys 0m0.009s 00:04:28.440 20:54:02 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.440 20:54:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.440 ************************************ 00:04:28.440 END TEST rpc_plugins 00:04:28.440 ************************************ 00:04:28.440 20:54:02 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:28.440 20:54:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.440 20:54:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.440 20:54:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.440 ************************************ 00:04:28.440 START TEST rpc_trace_cmd_test 00:04:28.440 ************************************ 00:04:28.440 20:54:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:28.440 20:54:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:28.440 20:54:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:28.440 20:54:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.440 20:54:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:28.440 20:54:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.440 20:54:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:28.440 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2844386", 00:04:28.440 "tpoint_group_mask": "0x8", 00:04:28.440 "iscsi_conn": { 00:04:28.440 "mask": "0x2", 00:04:28.440 "tpoint_mask": "0x0" 00:04:28.440 }, 00:04:28.440 "scsi": { 00:04:28.440 "mask": "0x4", 00:04:28.440 "tpoint_mask": "0x0" 00:04:28.440 }, 00:04:28.440 "bdev": { 00:04:28.440 "mask": "0x8", 00:04:28.440 "tpoint_mask": "0xffffffffffffffff" 00:04:28.440 }, 00:04:28.440 "nvmf_rdma": { 00:04:28.440 "mask": "0x10", 00:04:28.440 "tpoint_mask": "0x0" 00:04:28.440 }, 00:04:28.440 "nvmf_tcp": { 00:04:28.440 "mask": "0x20", 00:04:28.440 
"tpoint_mask": "0x0" 00:04:28.440 }, 00:04:28.440 "ftl": { 00:04:28.440 "mask": "0x40", 00:04:28.440 "tpoint_mask": "0x0" 00:04:28.440 }, 00:04:28.440 "blobfs": { 00:04:28.440 "mask": "0x80", 00:04:28.440 "tpoint_mask": "0x0" 00:04:28.440 }, 00:04:28.440 "dsa": { 00:04:28.440 "mask": "0x200", 00:04:28.440 "tpoint_mask": "0x0" 00:04:28.440 }, 00:04:28.440 "thread": { 00:04:28.440 "mask": "0x400", 00:04:28.440 "tpoint_mask": "0x0" 00:04:28.440 }, 00:04:28.440 "nvme_pcie": { 00:04:28.440 "mask": "0x800", 00:04:28.440 "tpoint_mask": "0x0" 00:04:28.440 }, 00:04:28.440 "iaa": { 00:04:28.440 "mask": "0x1000", 00:04:28.440 "tpoint_mask": "0x0" 00:04:28.440 }, 00:04:28.440 "nvme_tcp": { 00:04:28.440 "mask": "0x2000", 00:04:28.440 "tpoint_mask": "0x0" 00:04:28.440 }, 00:04:28.440 "bdev_nvme": { 00:04:28.440 "mask": "0x4000", 00:04:28.440 "tpoint_mask": "0x0" 00:04:28.440 }, 00:04:28.440 "sock": { 00:04:28.440 "mask": "0x8000", 00:04:28.440 "tpoint_mask": "0x0" 00:04:28.440 }, 00:04:28.440 "blob": { 00:04:28.440 "mask": "0x10000", 00:04:28.440 "tpoint_mask": "0x0" 00:04:28.440 }, 00:04:28.440 "bdev_raid": { 00:04:28.440 "mask": "0x20000", 00:04:28.440 "tpoint_mask": "0x0" 00:04:28.440 }, 00:04:28.440 "scheduler": { 00:04:28.440 "mask": "0x40000", 00:04:28.440 "tpoint_mask": "0x0" 00:04:28.440 } 00:04:28.440 }' 00:04:28.440 20:54:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:28.440 20:54:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:28.440 20:54:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:28.440 20:54:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:28.440 20:54:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:28.440 20:54:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:28.440 20:54:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:28.440 20:54:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:28.440 20:54:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:28.698 20:54:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:28.698 00:04:28.698 real 0m0.189s 00:04:28.698 user 0m0.167s 00:04:28.698 sys 0m0.015s 00:04:28.698 20:54:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.698 20:54:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:28.698 ************************************ 00:04:28.698 END TEST rpc_trace_cmd_test 00:04:28.699 ************************************ 00:04:28.699 20:54:02 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:28.699 20:54:02 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:28.699 20:54:02 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:28.699 20:54:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.699 20:54:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.699 20:54:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.699 ************************************ 00:04:28.699 START TEST rpc_daemon_integrity 00:04:28.699 ************************************ 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.699 20:54:02 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:28.699 { 00:04:28.699 "name": "Malloc2", 00:04:28.699 "aliases": [ 00:04:28.699 "25c0caeb-a6b3-42e4-b424-1927b5ac2c63" 00:04:28.699 ], 00:04:28.699 "product_name": "Malloc disk", 00:04:28.699 "block_size": 512, 00:04:28.699 "num_blocks": 16384, 00:04:28.699 "uuid": "25c0caeb-a6b3-42e4-b424-1927b5ac2c63", 00:04:28.699 "assigned_rate_limits": { 00:04:28.699 "rw_ios_per_sec": 0, 00:04:28.699 "rw_mbytes_per_sec": 0, 00:04:28.699 "r_mbytes_per_sec": 0, 00:04:28.699 "w_mbytes_per_sec": 0 00:04:28.699 }, 00:04:28.699 "claimed": false, 00:04:28.699 "zoned": false, 00:04:28.699 "supported_io_types": { 00:04:28.699 "read": true, 00:04:28.699 "write": true, 00:04:28.699 "unmap": true, 00:04:28.699 "flush": true, 00:04:28.699 "reset": true, 00:04:28.699 "nvme_admin": false, 00:04:28.699 "nvme_io": false, 00:04:28.699 "nvme_io_md": false, 00:04:28.699 "write_zeroes": true, 00:04:28.699 "zcopy": true, 00:04:28.699 "get_zone_info": false, 00:04:28.699 "zone_management": false, 00:04:28.699 "zone_append": false, 00:04:28.699 "compare": false, 00:04:28.699 "compare_and_write": false, 00:04:28.699 "abort": true, 00:04:28.699 "seek_hole": false, 00:04:28.699 "seek_data": false, 00:04:28.699 "copy": true, 00:04:28.699 "nvme_iov_md": false 00:04:28.699 }, 00:04:28.699 "memory_domains": [ 00:04:28.699 { 00:04:28.699 "dma_device_id": "system", 00:04:28.699 "dma_device_type": 1 00:04:28.699 }, 00:04:28.699 { 00:04:28.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.699 "dma_device_type": 2 00:04:28.699 } 00:04:28.699 ], 00:04:28.699 "driver_specific": {} 00:04:28.699 } 00:04:28.699 ]' 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.699 [2024-11-19 20:54:02.418207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:28.699 
[2024-11-19 20:54:02.418268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:28.699 [2024-11-19 20:54:02.418310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023a80 00:04:28.699 [2024-11-19 20:54:02.418334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:28.699 [2024-11-19 20:54:02.421136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:28.699 [2024-11-19 20:54:02.421172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:28.699 Passthru0 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:28.699 { 00:04:28.699 "name": "Malloc2", 00:04:28.699 "aliases": [ 00:04:28.699 "25c0caeb-a6b3-42e4-b424-1927b5ac2c63" 00:04:28.699 ], 00:04:28.699 "product_name": "Malloc disk", 00:04:28.699 "block_size": 512, 00:04:28.699 "num_blocks": 16384, 00:04:28.699 "uuid": "25c0caeb-a6b3-42e4-b424-1927b5ac2c63", 00:04:28.699 "assigned_rate_limits": { 00:04:28.699 "rw_ios_per_sec": 0, 00:04:28.699 "rw_mbytes_per_sec": 0, 00:04:28.699 "r_mbytes_per_sec": 0, 00:04:28.699 "w_mbytes_per_sec": 0 00:04:28.699 }, 00:04:28.699 "claimed": true, 00:04:28.699 "claim_type": "exclusive_write", 00:04:28.699 "zoned": false, 00:04:28.699 "supported_io_types": { 00:04:28.699 "read": true, 00:04:28.699 "write": true, 00:04:28.699 "unmap": true, 00:04:28.699 "flush": true, 00:04:28.699 "reset": true, 00:04:28.699 "nvme_admin": false, 00:04:28.699 "nvme_io": false, 00:04:28.699 "nvme_io_md": false, 00:04:28.699 "write_zeroes": true, 00:04:28.699 "zcopy": true, 00:04:28.699 "get_zone_info": false, 00:04:28.699 "zone_management": false, 00:04:28.699 "zone_append": false, 00:04:28.699 "compare": false, 00:04:28.699 "compare_and_write": false, 00:04:28.699 "abort": true, 00:04:28.699 "seek_hole": false, 00:04:28.699 "seek_data": false, 00:04:28.699 "copy": true, 00:04:28.699 "nvme_iov_md": false 00:04:28.699 }, 00:04:28.699 "memory_domains": [ 00:04:28.699 { 00:04:28.699 "dma_device_id": "system", 00:04:28.699 "dma_device_type": 1 00:04:28.699 }, 00:04:28.699 { 00:04:28.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.699 "dma_device_type": 2 00:04:28.699 } 00:04:28.699 ], 00:04:28.699 "driver_specific": {} 00:04:28.699 }, 00:04:28.699 { 00:04:28.699 "name": "Passthru0", 00:04:28.699 "aliases": [ 00:04:28.699 "b38f894f-7b09-5c7c-8252-6b9c34dc79cc" 00:04:28.699 ], 00:04:28.699 "product_name": "passthru", 00:04:28.699 "block_size": 512, 00:04:28.699 "num_blocks": 16384, 00:04:28.699 "uuid": "b38f894f-7b09-5c7c-8252-6b9c34dc79cc", 00:04:28.699 "assigned_rate_limits": { 00:04:28.699 "rw_ios_per_sec": 0, 00:04:28.699 "rw_mbytes_per_sec": 0, 00:04:28.699 "r_mbytes_per_sec": 0, 00:04:28.699 "w_mbytes_per_sec": 0 00:04:28.699 }, 00:04:28.699 "claimed": false, 00:04:28.699 "zoned": false, 00:04:28.699 "supported_io_types": { 00:04:28.699 "read": true, 00:04:28.699 "write": true, 00:04:28.699 "unmap": true, 00:04:28.699 "flush": true, 00:04:28.699 "reset": true, 
00:04:28.699 "nvme_admin": false, 00:04:28.699 "nvme_io": false, 00:04:28.699 "nvme_io_md": false, 00:04:28.699 "write_zeroes": true, 00:04:28.699 "zcopy": true, 00:04:28.699 "get_zone_info": false, 00:04:28.699 "zone_management": false, 00:04:28.699 "zone_append": false, 00:04:28.699 "compare": false, 00:04:28.699 "compare_and_write": false, 00:04:28.699 "abort": true, 00:04:28.699 "seek_hole": false, 00:04:28.699 "seek_data": false, 00:04:28.699 "copy": true, 00:04:28.699 "nvme_iov_md": false 00:04:28.699 }, 00:04:28.699 "memory_domains": [ 00:04:28.699 { 00:04:28.699 "dma_device_id": "system", 00:04:28.699 "dma_device_type": 1 00:04:28.699 }, 00:04:28.699 { 00:04:28.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.699 "dma_device_type": 2 00:04:28.699 } 00:04:28.699 ], 00:04:28.699 "driver_specific": { 00:04:28.699 "passthru": { 00:04:28.699 "name": "Passthru0", 00:04:28.699 "base_bdev_name": "Malloc2" 00:04:28.699 } 00:04:28.699 } 00:04:28.699 } 00:04:28.699 ]' 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.699 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.958 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.958 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:28.958 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.958 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.958 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.958 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:28.958 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:28.958 20:54:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:28.958 00:04:28.958 real 0m0.253s 00:04:28.958 user 0m0.157s 00:04:28.958 sys 0m0.014s 00:04:28.958 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.958 20:54:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.958 ************************************ 00:04:28.958 END TEST rpc_daemon_integrity 00:04:28.958 ************************************ 00:04:28.958 20:54:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:28.958 20:54:02 rpc -- rpc/rpc.sh@84 -- # killprocess 2844386 00:04:28.958 20:54:02 rpc -- common/autotest_common.sh@954 -- # '[' -z 2844386 ']' 00:04:28.958 20:54:02 rpc -- common/autotest_common.sh@958 -- # kill -0 2844386 00:04:28.958 20:54:02 rpc -- common/autotest_common.sh@959 -- # uname 00:04:28.958 20:54:02 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.958 20:54:02 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2844386 
00:04:28.958 20:54:02 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.958 20:54:02 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.958 20:54:02 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2844386' 00:04:28.958 killing process with pid 2844386 00:04:28.958 20:54:02 rpc -- common/autotest_common.sh@973 -- # kill 2844386 00:04:28.958 20:54:02 rpc -- common/autotest_common.sh@978 -- # wait 2844386 00:04:31.489 00:04:31.489 real 0m4.963s 00:04:31.489 user 0m5.473s 00:04:31.489 sys 0m0.867s 00:04:31.489 20:54:05 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.489 20:54:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.489 ************************************ 00:04:31.489 END TEST rpc 00:04:31.489 ************************************ 00:04:31.489 20:54:05 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:31.489 20:54:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.489 20:54:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.489 20:54:05 -- common/autotest_common.sh@10 -- # set +x 00:04:31.489 ************************************ 00:04:31.489 START TEST skip_rpc 00:04:31.489 ************************************ 00:04:31.489 20:54:05 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:31.489 * Looking for test storage... 00:04:31.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:31.489 20:54:05 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:31.489 20:54:05 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:31.489 20:54:05 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:31.489 20:54:05 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.489 20:54:05 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:31.489 20:54:05 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.489 20:54:05 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:31.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.489 --rc genhtml_branch_coverage=1 00:04:31.489 --rc genhtml_function_coverage=1 00:04:31.489 --rc genhtml_legend=1 00:04:31.489 --rc geninfo_all_blocks=1 00:04:31.489 --rc geninfo_unexecuted_blocks=1 00:04:31.489 00:04:31.489 ' 00:04:31.489 20:54:05 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:31.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.489 --rc genhtml_branch_coverage=1 00:04:31.489 --rc genhtml_function_coverage=1 00:04:31.489 --rc genhtml_legend=1 00:04:31.489 --rc geninfo_all_blocks=1 00:04:31.489 --rc geninfo_unexecuted_blocks=1 00:04:31.489 00:04:31.489 ' 00:04:31.489 20:54:05 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:31.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.489 --rc genhtml_branch_coverage=1 00:04:31.489 --rc genhtml_function_coverage=1 00:04:31.489 --rc genhtml_legend=1 00:04:31.489 --rc geninfo_all_blocks=1 00:04:31.489 --rc geninfo_unexecuted_blocks=1 00:04:31.489 00:04:31.489 ' 00:04:31.489 20:54:05 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:31.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.489 --rc genhtml_branch_coverage=1 00:04:31.489 --rc genhtml_function_coverage=1 00:04:31.489 --rc genhtml_legend=1 00:04:31.489 --rc geninfo_all_blocks=1 00:04:31.489 --rc geninfo_unexecuted_blocks=1 00:04:31.489 00:04:31.489 ' 00:04:31.489 20:54:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:31.489 20:54:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:31.489 20:54:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:31.489 20:54:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.489 20:54:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.489 20:54:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.489 ************************************ 00:04:31.489 START TEST skip_rpc 00:04:31.489 ************************************ 00:04:31.489 20:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:31.489 
20:54:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2845197 00:04:31.490 20:54:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:31.490 20:54:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.490 20:54:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:31.748 [2024-11-19 20:54:05.363395] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:31.748 [2024-11-19 20:54:05.363544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2845197 ] 00:04:31.748 [2024-11-19 20:54:05.511185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.006 [2024-11-19 20:54:05.650377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2845197 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2845197 ']' 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2845197 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2845197 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.271 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2845197' 00:04:37.271 killing process with pid 2845197 00:04:37.272 20:54:10 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2845197 00:04:37.272 20:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2845197 00:04:39.174 00:04:39.174 real 0m7.472s 00:04:39.174 user 0m6.952s 00:04:39.174 sys 0m0.509s 00:04:39.174 20:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.174 20:54:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.174 ************************************ 00:04:39.174 END TEST skip_rpc 00:04:39.174 ************************************ 00:04:39.174 20:54:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:39.174 20:54:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.174 20:54:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.174 20:54:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.174 ************************************ 00:04:39.174 START TEST skip_rpc_with_json 00:04:39.174 ************************************ 00:04:39.174 20:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:39.174 20:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:39.174 20:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2846653 00:04:39.174 20:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.174 20:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.174 20:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2846653 00:04:39.174 20:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2846653 ']' 00:04:39.174 20:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.174 20:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.174 20:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.174 20:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.174 20:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.174 [2024-11-19 20:54:12.885125] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:04:39.174 [2024-11-19 20:54:12.885290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846653 ] 00:04:39.433 [2024-11-19 20:54:13.022344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.433 [2024-11-19 20:54:13.154826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.369 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.369 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:40.369 20:54:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:40.369 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.369 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.369 [2024-11-19 20:54:14.117767] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:40.369 request: 00:04:40.369 { 00:04:40.369 "trtype": "tcp", 00:04:40.369 "method": "nvmf_get_transports", 00:04:40.369 "req_id": 1 00:04:40.369 } 00:04:40.369 Got JSON-RPC error response 00:04:40.369 response: 00:04:40.369 { 00:04:40.369 "code": -19, 00:04:40.369 "message": "No such device" 00:04:40.369 } 00:04:40.369 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:40.369 20:54:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:40.369 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.369 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.369 [2024-11-19 20:54:14.125914] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:40.369 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.369 20:54:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:40.369 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.369 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.628 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.628 20:54:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:40.628 { 00:04:40.628 "subsystems": [ 00:04:40.628 { 00:04:40.628 "subsystem": "fsdev", 00:04:40.628 "config": [ 00:04:40.628 { 00:04:40.628 "method": "fsdev_set_opts", 00:04:40.628 "params": { 00:04:40.628 "fsdev_io_pool_size": 65535, 00:04:40.628 "fsdev_io_cache_size": 256 00:04:40.628 } 00:04:40.628 } 00:04:40.628 ] 00:04:40.628 }, 00:04:40.628 { 00:04:40.628 "subsystem": "keyring", 00:04:40.628 "config": [] 00:04:40.628 }, 00:04:40.628 { 00:04:40.628 "subsystem": "iobuf", 00:04:40.628 "config": [ 00:04:40.628 { 00:04:40.628 "method": "iobuf_set_options", 00:04:40.628 "params": { 00:04:40.628 "small_pool_count": 8192, 00:04:40.628 "large_pool_count": 1024, 00:04:40.628 "small_bufsize": 8192, 00:04:40.628 "large_bufsize": 135168, 00:04:40.628 "enable_numa": false 00:04:40.628 } 00:04:40.628 } 00:04:40.628 ] 00:04:40.628 }, 00:04:40.628 { 00:04:40.628 "subsystem": "sock", 00:04:40.628 "config": [ 
00:04:40.628 { 00:04:40.628 "method": "sock_set_default_impl", 00:04:40.628 "params": { 00:04:40.628 "impl_name": "posix" 00:04:40.628 } 00:04:40.628 }, 00:04:40.628 { 00:04:40.628 "method": "sock_impl_set_options", 00:04:40.628 "params": { 00:04:40.628 "impl_name": "ssl", 00:04:40.628 "recv_buf_size": 4096, 00:04:40.628 "send_buf_size": 4096, 00:04:40.628 "enable_recv_pipe": true, 00:04:40.628 "enable_quickack": false, 00:04:40.628 "enable_placement_id": 0, 00:04:40.628 "enable_zerocopy_send_server": true, 00:04:40.628 "enable_zerocopy_send_client": false, 00:04:40.628 "zerocopy_threshold": 0, 00:04:40.628 "tls_version": 0, 00:04:40.628 "enable_ktls": false 00:04:40.628 } 00:04:40.628 }, 00:04:40.628 { 00:04:40.628 "method": "sock_impl_set_options", 00:04:40.628 "params": { 00:04:40.628 "impl_name": "posix", 00:04:40.628 "recv_buf_size": 2097152, 00:04:40.628 "send_buf_size": 2097152, 00:04:40.628 "enable_recv_pipe": true, 00:04:40.628 "enable_quickack": false, 00:04:40.628 "enable_placement_id": 0, 00:04:40.628 "enable_zerocopy_send_server": true, 00:04:40.628 "enable_zerocopy_send_client": false, 00:04:40.628 "zerocopy_threshold": 0, 00:04:40.628 "tls_version": 0, 00:04:40.628 "enable_ktls": false 00:04:40.628 } 00:04:40.628 } 00:04:40.628 ] 00:04:40.628 }, 00:04:40.628 { 00:04:40.628 "subsystem": "vmd", 00:04:40.628 "config": [] 00:04:40.628 }, 00:04:40.628 { 00:04:40.628 "subsystem": "accel", 00:04:40.628 "config": [ 00:04:40.628 { 00:04:40.628 "method": "accel_set_options", 00:04:40.628 "params": { 00:04:40.628 "small_cache_size": 128, 00:04:40.628 "large_cache_size": 16, 00:04:40.628 "task_count": 2048, 00:04:40.628 "sequence_count": 2048, 00:04:40.628 "buf_count": 2048 00:04:40.628 } 00:04:40.628 } 00:04:40.628 ] 00:04:40.628 }, 00:04:40.628 { 00:04:40.628 "subsystem": "bdev", 00:04:40.628 "config": [ 00:04:40.628 { 00:04:40.628 "method": "bdev_set_options", 00:04:40.628 "params": { 00:04:40.628 "bdev_io_pool_size": 65535, 00:04:40.628 "bdev_io_cache_size": 256, 00:04:40.628 "bdev_auto_examine": true, 00:04:40.628 "iobuf_small_cache_size": 128, 00:04:40.628 "iobuf_large_cache_size": 16 00:04:40.628 } 00:04:40.628 }, 00:04:40.628 { 00:04:40.628 "method": "bdev_raid_set_options", 00:04:40.629 "params": { 00:04:40.629 "process_window_size_kb": 1024, 00:04:40.629 "process_max_bandwidth_mb_sec": 0 00:04:40.629 } 00:04:40.629 }, 00:04:40.629 { 00:04:40.629 "method": "bdev_iscsi_set_options", 00:04:40.629 "params": { 00:04:40.629 "timeout_sec": 30 00:04:40.629 } 00:04:40.629 }, 00:04:40.629 { 00:04:40.629 "method": "bdev_nvme_set_options", 00:04:40.629 "params": { 00:04:40.629 "action_on_timeout": "none", 00:04:40.629 "timeout_us": 0, 00:04:40.629 "timeout_admin_us": 0, 00:04:40.629 "keep_alive_timeout_ms": 10000, 00:04:40.629 "arbitration_burst": 0, 00:04:40.629 "low_priority_weight": 0, 00:04:40.629 "medium_priority_weight": 0, 00:04:40.629 "high_priority_weight": 0, 00:04:40.629 "nvme_adminq_poll_period_us": 10000, 00:04:40.629 "nvme_ioq_poll_period_us": 0, 00:04:40.629 "io_queue_requests": 0, 00:04:40.629 "delay_cmd_submit": true, 00:04:40.629 "transport_retry_count": 4, 00:04:40.629 "bdev_retry_count": 3, 00:04:40.629 "transport_ack_timeout": 0, 00:04:40.629 "ctrlr_loss_timeout_sec": 0, 00:04:40.629 "reconnect_delay_sec": 0, 00:04:40.629 "fast_io_fail_timeout_sec": 0, 00:04:40.629 "disable_auto_failback": false, 00:04:40.629 "generate_uuids": false, 00:04:40.629 "transport_tos": 0, 00:04:40.629 "nvme_error_stat": false, 00:04:40.629 "rdma_srq_size": 0, 00:04:40.629 "io_path_stat": 
false, 00:04:40.629 "allow_accel_sequence": false, 00:04:40.629 "rdma_max_cq_size": 0, 00:04:40.629 "rdma_cm_event_timeout_ms": 0, 00:04:40.629 "dhchap_digests": [ 00:04:40.629 "sha256", 00:04:40.629 "sha384", 00:04:40.629 "sha512" 00:04:40.629 ], 00:04:40.629 "dhchap_dhgroups": [ 00:04:40.629 "null", 00:04:40.629 "ffdhe2048", 00:04:40.629 "ffdhe3072", 00:04:40.629 "ffdhe4096", 00:04:40.629 "ffdhe6144", 00:04:40.629 "ffdhe8192" 00:04:40.629 ] 00:04:40.629 } 00:04:40.629 }, 00:04:40.629 { 00:04:40.629 "method": "bdev_nvme_set_hotplug", 00:04:40.629 "params": { 00:04:40.629 "period_us": 100000, 00:04:40.629 "enable": false 00:04:40.629 } 00:04:40.629 }, 00:04:40.629 { 00:04:40.629 "method": "bdev_wait_for_examine" 00:04:40.629 } 00:04:40.629 ] 00:04:40.629 }, 00:04:40.629 { 00:04:40.629 "subsystem": "scsi", 00:04:40.629 "config": null 00:04:40.629 }, 00:04:40.629 { 00:04:40.629 "subsystem": "scheduler", 00:04:40.629 "config": [ 00:04:40.629 { 00:04:40.629 "method": "framework_set_scheduler", 00:04:40.629 "params": { 00:04:40.629 "name": "static" 00:04:40.629 } 00:04:40.629 } 00:04:40.629 ] 00:04:40.629 }, 00:04:40.629 { 00:04:40.629 "subsystem": "vhost_scsi", 00:04:40.629 "config": [] 00:04:40.629 }, 00:04:40.629 { 00:04:40.629 "subsystem": "vhost_blk", 00:04:40.629 "config": [] 00:04:40.629 }, 00:04:40.629 { 00:04:40.629 "subsystem": "ublk", 00:04:40.629 "config": [] 00:04:40.629 }, 00:04:40.629 { 00:04:40.629 "subsystem": "nbd", 00:04:40.629 "config": [] 00:04:40.629 }, 00:04:40.629 { 00:04:40.629 "subsystem": "nvmf", 00:04:40.629 "config": [ 00:04:40.629 { 00:04:40.629 "method": "nvmf_set_config", 00:04:40.629 "params": { 00:04:40.629 "discovery_filter": "match_any", 00:04:40.629 "admin_cmd_passthru": { 00:04:40.629 "identify_ctrlr": false 00:04:40.629 }, 00:04:40.629 "dhchap_digests": [ 00:04:40.629 "sha256", 00:04:40.629 "sha384", 00:04:40.629 "sha512" 00:04:40.629 ], 00:04:40.629 "dhchap_dhgroups": [ 00:04:40.629 "null", 00:04:40.629 "ffdhe2048", 00:04:40.629 "ffdhe3072", 00:04:40.629 "ffdhe4096", 00:04:40.629 "ffdhe6144", 00:04:40.629 "ffdhe8192" 00:04:40.629 ] 00:04:40.629 } 00:04:40.629 }, 00:04:40.629 { 00:04:40.629 "method": "nvmf_set_max_subsystems", 00:04:40.629 "params": { 00:04:40.629 "max_subsystems": 1024 00:04:40.629 } 00:04:40.629 }, 00:04:40.629 { 00:04:40.629 "method": "nvmf_set_crdt", 00:04:40.629 "params": { 00:04:40.629 "crdt1": 0, 00:04:40.629 "crdt2": 0, 00:04:40.629 "crdt3": 0 00:04:40.629 } 00:04:40.629 }, 00:04:40.629 { 00:04:40.629 "method": "nvmf_create_transport", 00:04:40.629 "params": { 00:04:40.629 "trtype": "TCP", 00:04:40.629 "max_queue_depth": 128, 00:04:40.629 "max_io_qpairs_per_ctrlr": 127, 00:04:40.629 "in_capsule_data_size": 4096, 00:04:40.629 "max_io_size": 131072, 00:04:40.629 "io_unit_size": 131072, 00:04:40.629 "max_aq_depth": 128, 00:04:40.629 "num_shared_buffers": 511, 00:04:40.629 "buf_cache_size": 4294967295, 00:04:40.629 "dif_insert_or_strip": false, 00:04:40.629 "zcopy": false, 00:04:40.629 "c2h_success": true, 00:04:40.629 "sock_priority": 0, 00:04:40.629 "abort_timeout_sec": 1, 00:04:40.629 "ack_timeout": 0, 00:04:40.629 "data_wr_pool_size": 0 00:04:40.629 } 00:04:40.629 } 00:04:40.629 ] 00:04:40.629 }, 00:04:40.629 { 00:04:40.629 "subsystem": "iscsi", 00:04:40.629 "config": [ 00:04:40.629 { 00:04:40.629 "method": "iscsi_set_options", 00:04:40.629 "params": { 00:04:40.629 "node_base": "iqn.2016-06.io.spdk", 00:04:40.629 "max_sessions": 128, 00:04:40.629 "max_connections_per_session": 2, 00:04:40.629 "max_queue_depth": 64, 00:04:40.629 
"default_time2wait": 2, 00:04:40.629 "default_time2retain": 20, 00:04:40.629 "first_burst_length": 8192, 00:04:40.629 "immediate_data": true, 00:04:40.629 "allow_duplicated_isid": false, 00:04:40.629 "error_recovery_level": 0, 00:04:40.629 "nop_timeout": 60, 00:04:40.629 "nop_in_interval": 30, 00:04:40.629 "disable_chap": false, 00:04:40.629 "require_chap": false, 00:04:40.629 "mutual_chap": false, 00:04:40.629 "chap_group": 0, 00:04:40.629 "max_large_datain_per_connection": 64, 00:04:40.629 "max_r2t_per_connection": 4, 00:04:40.629 "pdu_pool_size": 36864, 00:04:40.629 "immediate_data_pool_size": 16384, 00:04:40.629 "data_out_pool_size": 2048 00:04:40.629 } 00:04:40.629 } 00:04:40.629 ] 00:04:40.629 } 00:04:40.629 ] 00:04:40.629 } 00:04:40.629 20:54:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:40.629 20:54:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2846653 00:04:40.629 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2846653 ']' 00:04:40.629 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2846653 00:04:40.629 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:40.629 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.629 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2846653 00:04:40.629 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.629 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.629 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2846653' 00:04:40.629 killing process with pid 2846653 00:04:40.629 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2846653 00:04:40.629 20:54:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2846653 00:04:43.161 20:54:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2847079 00:04:43.161 20:54:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:43.161 20:54:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:48.424 20:54:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2847079 00:04:48.424 20:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2847079 ']' 00:04:48.424 20:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2847079 00:04:48.424 20:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:48.424 20:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.424 20:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2847079 00:04:48.425 20:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.425 20:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.425 20:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2847079' 00:04:48.425 killing process with pid 2847079 00:04:48.425 
20:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2847079 00:04:48.425 20:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2847079 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:51.013 00:04:51.013 real 0m11.400s 00:04:51.013 user 0m10.912s 00:04:51.013 sys 0m1.100s 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.013 ************************************ 00:04:51.013 END TEST skip_rpc_with_json 00:04:51.013 ************************************ 00:04:51.013 20:54:24 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:51.013 20:54:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.013 20:54:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.013 20:54:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.013 ************************************ 00:04:51.013 START TEST skip_rpc_with_delay 00:04:51.013 ************************************ 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:51.013 [2024-11-19 20:54:24.342624] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC 
server is going to be started. 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:51.013 00:04:51.013 real 0m0.151s 00:04:51.013 user 0m0.075s 00:04:51.013 sys 0m0.076s 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.013 20:54:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:51.013 ************************************ 00:04:51.013 END TEST skip_rpc_with_delay 00:04:51.013 ************************************ 00:04:51.013 20:54:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:51.013 20:54:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:51.013 20:54:24 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:51.013 20:54:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.013 20:54:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.013 20:54:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.013 ************************************ 00:04:51.013 START TEST exit_on_failed_rpc_init 00:04:51.013 ************************************ 00:04:51.013 20:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:51.014 20:54:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2848046 00:04:51.014 20:54:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.014 20:54:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2848046 00:04:51.014 20:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2848046 ']' 00:04:51.014 20:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.014 20:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.014 20:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.014 20:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.014 20:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.014 [2024-11-19 20:54:24.543795] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
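[annotation, not part of the captured log] At this point the test is parked in waitforlisten until the freshly launched spdk_tgt answers on its RPC socket. A rough standalone equivalent of that wait (illustrative only, not the helper's actual code; the socket path is the default one used here):

    # poll the RPC server; rpc_get_methods only succeeds once the socket is accepting connections
    for i in $(seq 1 100); do
        ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done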
00:04:51.014 [2024-11-19 20:54:24.543939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848046 ] 00:04:51.014 [2024-11-19 20:54:24.679317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.274 [2024-11-19 20:54:24.813929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.207 20:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.207 20:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:52.207 20:54:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.207 20:54:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.207 20:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:52.207 20:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.207 20:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.207 20:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.207 20:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.207 20:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.207 20:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.207 20:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.207 20:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.208 20:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:52.208 20:54:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.208 [2024-11-19 20:54:25.889711] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:04:52.208 [2024-11-19 20:54:25.889882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848196 ] 00:04:52.465 [2024-11-19 20:54:26.049017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.465 [2024-11-19 20:54:26.186925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.465 [2024-11-19 20:54:26.187096] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
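[annotation, not part of the captured log] This "socket in use" error is exactly what exit_on_failed_rpc_init is written to provoke: the second spdk_tgt (-m 0x2) tries to claim /var/tmp/spdk.sock while the first instance still owns it. Outside the test, the usual way to run two targets side by side is to give each one its own RPC socket with -r (the paths below are hypothetical):

    # each target gets a private RPC socket, so neither launch collides with the other
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
    # each instance is then addressed explicitly with -s
    ./scripts/rpc.py -s /var/tmp/spdk_b.sock rpc_get_methods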
00:04:52.465 [2024-11-19 20:54:26.187147] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:52.465 [2024-11-19 20:54:26.187166] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:52.723 20:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:52.723 20:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:52.723 20:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:52.723 20:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:52.723 20:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:52.723 20:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:52.723 20:54:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:52.723 20:54:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2848046 00:04:52.723 20:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2848046 ']' 00:04:52.723 20:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2848046 00:04:52.723 20:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:52.723 20:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.723 20:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2848046 00:04:52.723 20:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.723 20:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.723 20:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2848046' 00:04:52.723 killing process with pid 2848046 00:04:52.723 20:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2848046 00:04:52.723 20:54:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2848046 00:04:55.253 00:04:55.253 real 0m4.489s 00:04:55.253 user 0m5.014s 00:04:55.253 sys 0m0.761s 00:04:55.253 20:54:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.253 20:54:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.253 ************************************ 00:04:55.253 END TEST exit_on_failed_rpc_init 00:04:55.253 ************************************ 00:04:55.253 20:54:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:55.253 00:04:55.253 real 0m23.856s 00:04:55.253 user 0m23.145s 00:04:55.253 sys 0m2.618s 00:04:55.253 20:54:28 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.253 20:54:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.253 ************************************ 00:04:55.253 END TEST skip_rpc 00:04:55.253 ************************************ 00:04:55.253 20:54:28 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:55.253 20:54:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.253 20:54:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.253 20:54:28 -- 
common/autotest_common.sh@10 -- # set +x 00:04:55.253 ************************************ 00:04:55.253 START TEST rpc_client 00:04:55.253 ************************************ 00:04:55.253 20:54:29 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:55.511 * Looking for test storage... 00:04:55.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:55.511 20:54:29 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:55.511 20:54:29 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:55.511 20:54:29 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:55.511 20:54:29 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.511 20:54:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:55.511 20:54:29 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.511 20:54:29 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:55.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.511 --rc genhtml_branch_coverage=1 00:04:55.511 --rc genhtml_function_coverage=1 00:04:55.511 --rc genhtml_legend=1 00:04:55.511 --rc geninfo_all_blocks=1 00:04:55.511 --rc geninfo_unexecuted_blocks=1 00:04:55.511 00:04:55.511 ' 00:04:55.511 20:54:29 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:55.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.511 --rc genhtml_branch_coverage=1 00:04:55.511 --rc genhtml_function_coverage=1 00:04:55.511 --rc genhtml_legend=1 00:04:55.511 --rc geninfo_all_blocks=1 00:04:55.511 --rc geninfo_unexecuted_blocks=1 00:04:55.511 00:04:55.511 ' 00:04:55.511 20:54:29 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:55.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.511 --rc genhtml_branch_coverage=1 00:04:55.511 --rc genhtml_function_coverage=1 00:04:55.511 --rc genhtml_legend=1 00:04:55.511 --rc geninfo_all_blocks=1 00:04:55.511 --rc geninfo_unexecuted_blocks=1 00:04:55.511 00:04:55.511 ' 00:04:55.511 20:54:29 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:55.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.511 --rc genhtml_branch_coverage=1 00:04:55.511 --rc genhtml_function_coverage=1 00:04:55.511 --rc genhtml_legend=1 00:04:55.511 --rc geninfo_all_blocks=1 00:04:55.511 --rc geninfo_unexecuted_blocks=1 00:04:55.511 00:04:55.511 ' 00:04:55.511 20:54:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:55.511 OK 00:04:55.511 20:54:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:55.511 00:04:55.511 real 0m0.188s 00:04:55.511 user 0m0.116s 00:04:55.511 sys 0m0.081s 00:04:55.511 20:54:29 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.511 20:54:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:55.511 ************************************ 00:04:55.511 END TEST rpc_client 00:04:55.511 ************************************ 00:04:55.511 20:54:29 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
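[annotation, not part of the captured log] The long cmp_versions trace above is just a dotted-version comparison used to decide which lcov --rc option names get exported (the 1.x-style lcov_* flags were chosen here). A compact standalone equivalent of that comparison, illustrative rather than the script's own implementation:

    # 'lt A B' succeeds when version A sorts strictly before version B
    lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    lt 1.15 2 && echo "lcov predates 2.x -> use the lcov_branch/function coverage flag names"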
00:04:55.511 20:54:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.511 20:54:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.511 20:54:29 -- common/autotest_common.sh@10 -- # set +x 00:04:55.511 ************************************ 00:04:55.511 START TEST json_config 00:04:55.511 ************************************ 00:04:55.511 20:54:29 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:55.511 20:54:29 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:55.511 20:54:29 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:55.511 20:54:29 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:55.770 20:54:29 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:55.770 20:54:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.770 20:54:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.770 20:54:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.770 20:54:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.770 20:54:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.770 20:54:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.770 20:54:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.770 20:54:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.770 20:54:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.770 20:54:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.770 20:54:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.770 20:54:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:55.770 20:54:29 json_config -- scripts/common.sh@345 -- # : 1 00:04:55.770 20:54:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.770 20:54:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.770 20:54:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:55.770 20:54:29 json_config -- scripts/common.sh@353 -- # local d=1 00:04:55.770 20:54:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.770 20:54:29 json_config -- scripts/common.sh@355 -- # echo 1 00:04:55.770 20:54:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.770 20:54:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:55.770 20:54:29 json_config -- scripts/common.sh@353 -- # local d=2 00:04:55.771 20:54:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.771 20:54:29 json_config -- scripts/common.sh@355 -- # echo 2 00:04:55.771 20:54:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.771 20:54:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.771 20:54:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.771 20:54:29 json_config -- scripts/common.sh@368 -- # return 0 00:04:55.771 20:54:29 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.771 20:54:29 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:55.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.771 --rc genhtml_branch_coverage=1 00:04:55.771 --rc genhtml_function_coverage=1 00:04:55.771 --rc genhtml_legend=1 00:04:55.771 --rc geninfo_all_blocks=1 00:04:55.771 --rc geninfo_unexecuted_blocks=1 00:04:55.771 00:04:55.771 ' 00:04:55.771 20:54:29 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:55.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.771 --rc genhtml_branch_coverage=1 00:04:55.771 --rc genhtml_function_coverage=1 00:04:55.771 --rc genhtml_legend=1 00:04:55.771 --rc geninfo_all_blocks=1 00:04:55.771 --rc geninfo_unexecuted_blocks=1 00:04:55.771 00:04:55.771 ' 00:04:55.771 20:54:29 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:55.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.771 --rc genhtml_branch_coverage=1 00:04:55.771 --rc genhtml_function_coverage=1 00:04:55.771 --rc genhtml_legend=1 00:04:55.771 --rc geninfo_all_blocks=1 00:04:55.771 --rc geninfo_unexecuted_blocks=1 00:04:55.771 00:04:55.771 ' 00:04:55.771 20:54:29 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:55.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.771 --rc genhtml_branch_coverage=1 00:04:55.771 --rc genhtml_function_coverage=1 00:04:55.771 --rc genhtml_legend=1 00:04:55.771 --rc geninfo_all_blocks=1 00:04:55.771 --rc geninfo_unexecuted_blocks=1 00:04:55.771 00:04:55.771 ' 00:04:55.771 20:54:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:55.771 20:54:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:55.771 20:54:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:55.771 20:54:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.771 20:54:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.771 20:54:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.771 20:54:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.771 20:54:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.771 20:54:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.771 20:54:29 json_config -- paths/export.sh@5 -- # export PATH 00:04:55.771 20:54:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@51 -- # : 0 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:55.771 20:54:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:55.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:55.771 20:54:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:55.771 20:54:29 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:55.771 20:54:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:55.771 20:54:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:55.771 20:54:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:55.771 20:54:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:55.771 20:54:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:55.771 20:54:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:55.771 20:54:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:55.771 20:54:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:55.772 20:54:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:55.772 20:54:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:55.772 20:54:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:55.772 20:54:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:55.772 20:54:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:55.772 20:54:29 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:55.772 20:54:29 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:55.772 INFO: JSON configuration test init 00:04:55.772 20:54:29 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:55.772 20:54:29 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:55.772 20:54:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:55.772 20:54:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.772 20:54:29 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:55.772 20:54:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:55.772 20:54:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.772 20:54:29 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:55.772 20:54:29 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:55.772 20:54:29 json_config -- json_config/common.sh@10 -- # shift 00:04:55.772 20:54:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:55.772 20:54:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:55.772 20:54:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:55.772 20:54:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.772 20:54:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.772 20:54:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2848841 00:04:55.772 20:54:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:55.772 20:54:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:55.772 Waiting for target to run... 00:04:55.772 20:54:29 json_config -- json_config/common.sh@25 -- # waitforlisten 2848841 /var/tmp/spdk_tgt.sock 00:04:55.772 20:54:29 json_config -- common/autotest_common.sh@835 -- # '[' -z 2848841 ']' 00:04:55.772 20:54:29 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:55.772 20:54:29 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.772 20:54:29 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:55.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:55.772 20:54:29 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.772 20:54:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.772 [2024-11-19 20:54:29.492615] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:04:55.772 [2024-11-19 20:54:29.492763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848841 ] 00:04:56.339 [2024-11-19 20:54:29.934743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.339 [2024-11-19 20:54:30.064331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.905 20:54:30 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.905 20:54:30 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:56.905 20:54:30 json_config -- json_config/common.sh@26 -- # echo '' 00:04:56.905 00:04:56.905 20:54:30 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:56.905 20:54:30 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:56.905 20:54:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.905 20:54:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.905 20:54:30 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:56.905 20:54:30 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:56.905 20:54:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:56.905 20:54:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.905 20:54:30 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:56.905 20:54:30 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:56.905 20:54:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:01.091 20:54:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:01.091 20:54:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:01.091 20:54:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:01.091 20:54:34 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@54 -- # sort 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:01.091 20:54:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.091 20:54:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:01.091 20:54:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:01.091 20:54:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:01.091 20:54:34 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:01.091 20:54:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:01.349 MallocForNvmf0 00:05:01.349 20:54:34 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:01.349 20:54:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:01.607 MallocForNvmf1 00:05:01.607 20:54:35 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:01.608 20:54:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:01.865 [2024-11-19 20:54:35.497435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:01.865 20:54:35 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:01.865 20:54:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:02.124 20:54:35 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:02.124 20:54:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:02.381 20:54:36 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:02.381 20:54:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:02.639 20:54:36 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:02.639 20:54:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:02.896 [2024-11-19 20:54:36.589262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:02.896 20:54:36 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:02.896 20:54:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:02.896 20:54:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.896 20:54:36 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:02.896 20:54:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:02.896 20:54:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.896 20:54:36 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:02.896 20:54:36 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:02.896 20:54:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:03.154 MallocBdevForConfigChangeCheck 00:05:03.154 20:54:36 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:03.154 20:54:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:03.154 20:54:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.413 20:54:36 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:03.413 20:54:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:03.671 20:54:37 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:03.671 INFO: shutting down applications... 
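[annotation, not part of the captured log] Stripped of the tgt_rpc wrapper, the subsystem configuration that was just built and saved amounts to a handful of rpc.py calls. Collected into one sketch (socket path, sizes and NQN exactly as used above; the output redirect is an assumption about where the saved config lands):

    RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0                      # backing bdevs
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0                           # TCP transport
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $RPC save_config > spdk_tgt_config.json                                  # persisted for the relaunch step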
00:05:03.671 20:54:37 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:03.671 20:54:37 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:03.671 20:54:37 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:03.671 20:54:37 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:05.574 Calling clear_iscsi_subsystem 00:05:05.574 Calling clear_nvmf_subsystem 00:05:05.574 Calling clear_nbd_subsystem 00:05:05.574 Calling clear_ublk_subsystem 00:05:05.574 Calling clear_vhost_blk_subsystem 00:05:05.574 Calling clear_vhost_scsi_subsystem 00:05:05.574 Calling clear_bdev_subsystem 00:05:05.575 20:54:39 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:05.575 20:54:39 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:05.575 20:54:39 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:05.575 20:54:39 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.575 20:54:39 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:05.575 20:54:39 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:05.832 20:54:39 json_config -- json_config/json_config.sh@352 -- # break 00:05:05.832 20:54:39 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:05.832 20:54:39 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:05.832 20:54:39 json_config -- json_config/common.sh@31 -- # local app=target 00:05:05.832 20:54:39 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:05.832 20:54:39 json_config -- json_config/common.sh@35 -- # [[ -n 2848841 ]] 00:05:05.832 20:54:39 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2848841 00:05:05.832 20:54:39 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:05.832 20:54:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.832 20:54:39 json_config -- json_config/common.sh@41 -- # kill -0 2848841 00:05:05.832 20:54:39 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.398 20:54:39 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.398 20:54:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.398 20:54:39 json_config -- json_config/common.sh@41 -- # kill -0 2848841 00:05:06.398 20:54:39 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.657 20:54:40 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.657 20:54:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.915 20:54:40 json_config -- json_config/common.sh@41 -- # kill -0 2848841 00:05:06.915 20:54:40 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:06.915 20:54:40 json_config -- json_config/common.sh@43 -- # break 00:05:06.915 20:54:40 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:06.915 20:54:40 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:06.915 SPDK target shutdown done 00:05:06.915 20:54:40 
json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:06.915 INFO: relaunching applications... 00:05:06.915 20:54:40 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.915 20:54:40 json_config -- json_config/common.sh@9 -- # local app=target 00:05:06.915 20:54:40 json_config -- json_config/common.sh@10 -- # shift 00:05:06.915 20:54:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:06.915 20:54:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:06.915 20:54:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:06.915 20:54:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.915 20:54:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.915 20:54:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2850185 00:05:06.915 20:54:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.915 20:54:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:06.915 Waiting for target to run... 00:05:06.915 20:54:40 json_config -- json_config/common.sh@25 -- # waitforlisten 2850185 /var/tmp/spdk_tgt.sock 00:05:06.915 20:54:40 json_config -- common/autotest_common.sh@835 -- # '[' -z 2850185 ']' 00:05:06.915 20:54:40 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:06.915 20:54:40 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.915 20:54:40 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:06.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:06.915 20:54:40 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.915 20:54:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.915 [2024-11-19 20:54:40.555388] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:06.915 [2024-11-19 20:54:40.555540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2850185 ] 00:05:07.481 [2024-11-19 20:54:41.195889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.739 [2024-11-19 20:54:41.323902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.923 [2024-11-19 20:54:45.114327] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:11.923 [2024-11-19 20:54:45.146926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:11.923 20:54:45 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.923 20:54:45 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:11.923 20:54:45 json_config -- json_config/common.sh@26 -- # echo '' 00:05:11.923 00:05:11.923 20:54:45 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:11.923 20:54:45 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:11.923 INFO: Checking if target configuration is the same... 00:05:11.923 20:54:45 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.923 20:54:45 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:11.923 20:54:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.923 + '[' 2 -ne 2 ']' 00:05:11.923 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:11.923 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:11.923 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:11.923 +++ basename /dev/fd/62 00:05:11.923 ++ mktemp /tmp/62.XXX 00:05:11.923 + tmp_file_1=/tmp/62.G2O 00:05:11.923 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.923 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:11.923 + tmp_file_2=/tmp/spdk_tgt_config.json.FI0 00:05:11.923 + ret=0 00:05:11.923 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:11.923 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:11.923 + diff -u /tmp/62.G2O /tmp/spdk_tgt_config.json.FI0 00:05:11.923 + echo 'INFO: JSON config files are the same' 00:05:11.923 INFO: JSON config files are the same 00:05:11.923 + rm /tmp/62.G2O /tmp/spdk_tgt_config.json.FI0 00:05:11.923 + exit 0 00:05:11.923 20:54:45 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:11.923 20:54:45 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:11.923 INFO: changing configuration and checking if this can be detected... 
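[annotation, not part of the captured log] The "JSON config files are the same" check above is a plain textual diff of two sorted dumps: the live configuration pulled over RPC and the file the target was relaunched with. Roughly, with made-up temporary file names:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live.json
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/ondisk.json
    diff -u /tmp/ondisk.json /tmp/live.json && echo "INFO: JSON config files are the same"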
00:05:11.923 20:54:45 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:11.923 20:54:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:12.181 20:54:45 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.181 20:54:45 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:12.181 20:54:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.181 + '[' 2 -ne 2 ']' 00:05:12.181 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:12.181 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:12.181 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:12.181 +++ basename /dev/fd/62 00:05:12.181 ++ mktemp /tmp/62.XXX 00:05:12.181 + tmp_file_1=/tmp/62.z6r 00:05:12.181 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.181 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:12.181 + tmp_file_2=/tmp/spdk_tgt_config.json.3tf 00:05:12.181 + ret=0 00:05:12.181 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.748 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:12.748 + diff -u /tmp/62.z6r /tmp/spdk_tgt_config.json.3tf 00:05:12.748 + ret=1 00:05:12.748 + echo '=== Start of file: /tmp/62.z6r ===' 00:05:12.748 + cat /tmp/62.z6r 00:05:12.748 + echo '=== End of file: /tmp/62.z6r ===' 00:05:12.748 + echo '' 00:05:12.748 + echo '=== Start of file: /tmp/spdk_tgt_config.json.3tf ===' 00:05:12.748 + cat /tmp/spdk_tgt_config.json.3tf 00:05:12.748 + echo '=== End of file: /tmp/spdk_tgt_config.json.3tf ===' 00:05:12.748 + echo '' 00:05:12.748 + rm /tmp/62.z6r /tmp/spdk_tgt_config.json.3tf 00:05:12.748 + exit 1 00:05:12.748 20:54:46 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:12.748 INFO: configuration change detected. 
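The comparison traced above reduces to three steps: dump the live configuration over the target's RPC socket, normalize both JSON documents with config_filter.py, and diff the results. A condensed bash sketch of that flow, with paths shortened relative to the log and the temp-file handling simplified versus json_diff.sh:

    rpc=scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    live=$(mktemp)
    saved=$(mktemp)
    # dump what the running target would currently write out as its config
    "$rpc" -s "$sock" save_config | test/json_config/config_filter.py -method sort > "$live"
    # normalize the reference config the target was started with
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$saved"
    if diff -u "$saved" "$live"; then
            echo 'INFO: JSON config files are the same'
    else
            echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$saved"

Deleting MallocBdevForConfigChangeCheck between the two runs is what flips the diff from clean to non-empty in the trace above.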
00:05:12.748 20:54:46 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:12.748 20:54:46 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:12.748 20:54:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.748 20:54:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.748 20:54:46 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:12.748 20:54:46 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:12.748 20:54:46 json_config -- json_config/json_config.sh@324 -- # [[ -n 2850185 ]] 00:05:12.748 20:54:46 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:12.748 20:54:46 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:12.748 20:54:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.748 20:54:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.748 20:54:46 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:12.748 20:54:46 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:12.748 20:54:46 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:12.748 20:54:46 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:12.748 20:54:46 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:12.748 20:54:46 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:12.748 20:54:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.748 20:54:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.748 20:54:46 json_config -- json_config/json_config.sh@330 -- # killprocess 2850185 00:05:12.748 20:54:46 json_config -- common/autotest_common.sh@954 -- # '[' -z 2850185 ']' 00:05:12.748 20:54:46 json_config -- common/autotest_common.sh@958 -- # kill -0 2850185 00:05:12.748 20:54:46 json_config -- common/autotest_common.sh@959 -- # uname 00:05:12.748 20:54:46 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.748 20:54:46 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2850185 00:05:12.748 20:54:46 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.748 20:54:46 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.748 20:54:46 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2850185' 00:05:12.748 killing process with pid 2850185 00:05:12.748 20:54:46 json_config -- common/autotest_common.sh@973 -- # kill 2850185 00:05:12.748 20:54:46 json_config -- common/autotest_common.sh@978 -- # wait 2850185 00:05:15.279 20:54:48 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.279 20:54:48 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:15.279 20:54:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:15.279 20:54:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.279 20:54:48 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:15.279 20:54:48 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:15.279 INFO: Success 00:05:15.279 00:05:15.279 real 0m19.638s 
00:05:15.279 user 0m21.322s 00:05:15.279 sys 0m3.092s 00:05:15.279 20:54:48 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.279 20:54:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.279 ************************************ 00:05:15.279 END TEST json_config 00:05:15.279 ************************************ 00:05:15.279 20:54:48 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:15.279 20:54:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.279 20:54:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.279 20:54:48 -- common/autotest_common.sh@10 -- # set +x 00:05:15.279 ************************************ 00:05:15.279 START TEST json_config_extra_key 00:05:15.279 ************************************ 00:05:15.279 20:54:48 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:15.279 20:54:48 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:15.279 20:54:48 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:15.279 20:54:48 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.279 20:54:49 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:15.279 20:54:49 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.279 20:54:49 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.279 --rc genhtml_branch_coverage=1 00:05:15.279 --rc genhtml_function_coverage=1 00:05:15.279 --rc genhtml_legend=1 00:05:15.279 --rc geninfo_all_blocks=1 00:05:15.279 --rc geninfo_unexecuted_blocks=1 00:05:15.279 00:05:15.279 ' 00:05:15.279 20:54:49 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.279 --rc genhtml_branch_coverage=1 00:05:15.279 --rc genhtml_function_coverage=1 00:05:15.279 --rc genhtml_legend=1 00:05:15.279 --rc geninfo_all_blocks=1 00:05:15.279 --rc geninfo_unexecuted_blocks=1 00:05:15.279 00:05:15.279 ' 00:05:15.279 20:54:49 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.279 --rc genhtml_branch_coverage=1 00:05:15.279 --rc genhtml_function_coverage=1 00:05:15.279 --rc genhtml_legend=1 00:05:15.279 --rc geninfo_all_blocks=1 00:05:15.279 --rc geninfo_unexecuted_blocks=1 00:05:15.279 00:05:15.279 ' 00:05:15.279 20:54:49 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.279 --rc genhtml_branch_coverage=1 00:05:15.279 --rc genhtml_function_coverage=1 00:05:15.279 --rc genhtml_legend=1 00:05:15.279 --rc geninfo_all_blocks=1 00:05:15.279 --rc geninfo_unexecuted_blocks=1 00:05:15.279 00:05:15.279 ' 00:05:15.279 20:54:49 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:15.279 20:54:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:15.279 20:54:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.279 20:54:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.279 20:54:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.279 20:54:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.279 
20:54:49 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.279 20:54:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.279 20:54:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:15.279 20:54:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.279 20:54:49 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.279 20:54:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.279 20:54:49 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:15.279 20:54:49 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:15.279 20:54:49 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.279 20:54:49 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.279 20:54:49 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:15.279 20:54:49 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:15.279 20:54:49 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.279 20:54:49 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.279 20:54:49 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.279 20:54:49 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.280 20:54:49 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.280 20:54:49 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:15.280 20:54:49 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.280 20:54:49 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:15.280 20:54:49 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:15.280 20:54:49 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:15.280 20:54:49 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:15.280 20:54:49 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.280 20:54:49 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.280 20:54:49 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:15.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:15.280 20:54:49 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:15.280 20:54:49 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:15.280 20:54:49 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:15.280 20:54:49 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:15.280 20:54:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:15.280 20:54:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:15.280 20:54:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:15.280 20:54:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:15.280 20:54:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:15.280 20:54:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:15.280 20:54:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:15.280 20:54:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:15.280 20:54:49 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:15.280 20:54:49 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:15.280 INFO: launching applications... 
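The extra_key variant drives the same common.sh helpers, seeded from the associative arrays declared above (per-app RPC socket, extra spdk_tgt parameters, and JSON config path). A minimal sketch of that pattern as it appears in the trace; the helper name and backgrounding details below are paraphrased from the log rather than quoted from common.sh:

    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='test/json_config/extra_key.json')
    declare -A app_pid=([target]='')

    start_app() {    # illustrative stand-in for json_config_test_start_app
            local app=$1
            # params deliberately unquoted so '-m 0x1 -s 1024' splits into flags
            build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" \
                    --json "${configs_path[$app]}" &
            app_pid[$app]=$!
            echo 'Waiting for target to run...'
    }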
00:05:15.280 20:54:49 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:15.280 20:54:49 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:15.280 20:54:49 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:15.280 20:54:49 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:15.280 20:54:49 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:15.280 20:54:49 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:15.280 20:54:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.280 20:54:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.280 20:54:49 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2851365 00:05:15.280 20:54:49 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:15.280 20:54:49 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:15.280 Waiting for target to run... 00:05:15.280 20:54:49 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2851365 /var/tmp/spdk_tgt.sock 00:05:15.280 20:54:49 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2851365 ']' 00:05:15.280 20:54:49 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.280 20:54:49 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.280 20:54:49 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:15.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:15.280 20:54:49 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.280 20:54:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:15.538 [2024-11-19 20:54:49.160171] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:15.538 [2024-11-19 20:54:49.160326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851365 ] 00:05:16.105 [2024-11-19 20:54:49.740706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.105 [2024-11-19 20:54:49.874415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.040 20:54:50 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.040 20:54:50 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:17.040 20:54:50 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:17.040 00:05:17.040 20:54:50 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:17.040 INFO: shutting down applications... 
00:05:17.040 20:54:50 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:17.040 20:54:50 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:17.040 20:54:50 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:17.040 20:54:50 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2851365 ]] 00:05:17.040 20:54:50 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2851365 00:05:17.040 20:54:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:17.040 20:54:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.040 20:54:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2851365 00:05:17.040 20:54:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.607 20:54:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.607 20:54:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.607 20:54:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2851365 00:05:17.607 20:54:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.866 20:54:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.866 20:54:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.866 20:54:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2851365 00:05:17.866 20:54:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:18.432 20:54:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:18.432 20:54:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.432 20:54:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2851365 00:05:18.432 20:54:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:18.999 20:54:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:18.999 20:54:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.999 20:54:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2851365 00:05:18.999 20:54:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:19.566 20:54:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:19.566 20:54:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.566 20:54:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2851365 00:05:19.566 20:54:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.135 20:54:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.135 20:54:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.135 20:54:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2851365 00:05:20.135 20:54:53 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:20.135 20:54:53 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:20.135 20:54:53 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:20.135 20:54:53 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:20.135 SPDK target shutdown done 00:05:20.135 20:54:53 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:20.135 Success 00:05:20.135 00:05:20.135 real 0m4.740s 00:05:20.135 user 0m4.302s 00:05:20.135 sys 0m0.843s 
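The shutdown sequence traced above is a plain signal-and-poll loop: send SIGINT to the target, then probe it with kill -0 (an existence check only, no signal delivered) every half second for at most 30 iterations. A minimal sketch, with the pid variable name assumed:

    pid=${app_pid[target]}
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2> /dev/null; then
                    echo 'SPDK target shutdown done'
                    break
            fi
            sleep 0.5
    done

Each half-second retry shows up in the trace as another kill -0 / sleep 0.5 pair until the process finally exits.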
00:05:20.135 20:54:53 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.135 20:54:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:20.135 ************************************ 00:05:20.135 END TEST json_config_extra_key 00:05:20.135 ************************************ 00:05:20.135 20:54:53 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.135 20:54:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.135 20:54:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.135 20:54:53 -- common/autotest_common.sh@10 -- # set +x 00:05:20.135 ************************************ 00:05:20.135 START TEST alias_rpc 00:05:20.135 ************************************ 00:05:20.135 20:54:53 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.135 * Looking for test storage... 00:05:20.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:20.135 20:54:53 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:20.135 20:54:53 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:20.135 20:54:53 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:20.135 20:54:53 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.135 20:54:53 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:20.136 20:54:53 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.136 20:54:53 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:20.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.136 --rc genhtml_branch_coverage=1 00:05:20.136 --rc genhtml_function_coverage=1 00:05:20.136 --rc genhtml_legend=1 00:05:20.136 --rc geninfo_all_blocks=1 00:05:20.136 --rc geninfo_unexecuted_blocks=1 00:05:20.136 00:05:20.136 ' 00:05:20.136 20:54:53 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:20.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.136 --rc genhtml_branch_coverage=1 00:05:20.136 --rc genhtml_function_coverage=1 00:05:20.136 --rc genhtml_legend=1 00:05:20.136 --rc geninfo_all_blocks=1 00:05:20.136 --rc geninfo_unexecuted_blocks=1 00:05:20.136 00:05:20.136 ' 00:05:20.136 20:54:53 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:20.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.136 --rc genhtml_branch_coverage=1 00:05:20.136 --rc genhtml_function_coverage=1 00:05:20.136 --rc genhtml_legend=1 00:05:20.136 --rc geninfo_all_blocks=1 00:05:20.136 --rc geninfo_unexecuted_blocks=1 00:05:20.136 00:05:20.136 ' 00:05:20.136 20:54:53 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:20.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.136 --rc genhtml_branch_coverage=1 00:05:20.136 --rc genhtml_function_coverage=1 00:05:20.136 --rc genhtml_legend=1 00:05:20.136 --rc geninfo_all_blocks=1 00:05:20.136 --rc geninfo_unexecuted_blocks=1 00:05:20.136 00:05:20.136 ' 00:05:20.136 20:54:53 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:20.136 20:54:53 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2851963 00:05:20.136 20:54:53 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.136 20:54:53 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2851963 00:05:20.136 20:54:53 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2851963 ']' 00:05:20.136 20:54:53 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.136 20:54:53 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.136 20:54:53 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:20.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.136 20:54:53 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.136 20:54:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.394 [2024-11-19 20:54:53.953815] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:20.394 [2024-11-19 20:54:53.953967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851963 ] 00:05:20.394 [2024-11-19 20:54:54.098003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.652 [2024-11-19 20:54:54.236612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.587 20:54:55 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.587 20:54:55 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:21.587 20:54:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:21.845 20:54:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2851963 00:05:21.845 20:54:55 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2851963 ']' 00:05:21.845 20:54:55 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2851963 00:05:21.845 20:54:55 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:21.845 20:54:55 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.845 20:54:55 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2851963 00:05:21.845 20:54:55 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.845 20:54:55 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.845 20:54:55 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2851963' 00:05:21.845 killing process with pid 2851963 00:05:21.845 20:54:55 alias_rpc -- common/autotest_common.sh@973 -- # kill 2851963 00:05:21.845 20:54:55 alias_rpc -- common/autotest_common.sh@978 -- # wait 2851963 00:05:24.374 00:05:24.374 real 0m4.272s 00:05:24.374 user 0m4.429s 00:05:24.374 sys 0m0.655s 00:05:24.374 20:54:57 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.374 20:54:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.374 ************************************ 00:05:24.374 END TEST alias_rpc 00:05:24.374 ************************************ 00:05:24.374 20:54:58 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:24.374 20:54:58 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:24.374 20:54:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.374 20:54:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.374 20:54:58 -- common/autotest_common.sh@10 -- # set +x 00:05:24.374 ************************************ 00:05:24.374 START TEST spdkcli_tcp 00:05:24.374 ************************************ 00:05:24.374 20:54:58 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:24.374 * Looking for test storage... 
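The alias_rpc test above boils down to replaying a configuration through rpc.py load_config with -i (include aliases), so subsystem entries recorded under deprecated method names are still accepted by a freshly started target. A hedged sketch of that call shape, assuming the JSON document arrives on stdin and using an empty placeholder payload rather than the test's actual input:

    # placeholder config; the real test feeds entries that use old alias names
    printf '%s\n' '{"subsystems": []}' | \
            scripts/rpc.py -s /var/tmp/spdk.sock load_config -i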
00:05:24.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:24.374 20:54:58 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:24.374 20:54:58 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:24.374 20:54:58 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.632 20:54:58 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.632 20:54:58 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:24.632 20:54:58 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.632 20:54:58 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:24.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.632 --rc genhtml_branch_coverage=1 00:05:24.632 --rc genhtml_function_coverage=1 00:05:24.632 --rc genhtml_legend=1 00:05:24.632 --rc geninfo_all_blocks=1 00:05:24.632 --rc geninfo_unexecuted_blocks=1 00:05:24.632 00:05:24.632 ' 00:05:24.632 20:54:58 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.633 --rc genhtml_branch_coverage=1 00:05:24.633 --rc genhtml_function_coverage=1 00:05:24.633 --rc genhtml_legend=1 00:05:24.633 --rc geninfo_all_blocks=1 00:05:24.633 --rc 
geninfo_unexecuted_blocks=1 00:05:24.633 00:05:24.633 ' 00:05:24.633 20:54:58 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:24.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.633 --rc genhtml_branch_coverage=1 00:05:24.633 --rc genhtml_function_coverage=1 00:05:24.633 --rc genhtml_legend=1 00:05:24.633 --rc geninfo_all_blocks=1 00:05:24.633 --rc geninfo_unexecuted_blocks=1 00:05:24.633 00:05:24.633 ' 00:05:24.633 20:54:58 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.633 --rc genhtml_branch_coverage=1 00:05:24.633 --rc genhtml_function_coverage=1 00:05:24.633 --rc genhtml_legend=1 00:05:24.633 --rc geninfo_all_blocks=1 00:05:24.633 --rc geninfo_unexecuted_blocks=1 00:05:24.633 00:05:24.633 ' 00:05:24.633 20:54:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:24.633 20:54:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:24.633 20:54:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:24.633 20:54:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:24.633 20:54:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:24.633 20:54:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:24.633 20:54:58 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:24.633 20:54:58 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.633 20:54:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.633 20:54:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2852556 00:05:24.633 20:54:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:24.633 20:54:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2852556 00:05:24.633 20:54:58 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2852556 ']' 00:05:24.633 20:54:58 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.633 20:54:58 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.633 20:54:58 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.633 20:54:58 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.633 20:54:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.633 [2024-11-19 20:54:58.283870] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:24.633 [2024-11-19 20:54:58.284018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852556 ] 00:05:24.633 [2024-11-19 20:54:58.418226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.892 [2024-11-19 20:54:58.551093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.892 [2024-11-19 20:54:58.551099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.827 20:54:59 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.827 20:54:59 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:25.827 20:54:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2852697 00:05:25.827 20:54:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:25.827 20:54:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:26.085 [ 00:05:26.085 "bdev_malloc_delete", 00:05:26.085 "bdev_malloc_create", 00:05:26.085 "bdev_null_resize", 00:05:26.085 "bdev_null_delete", 00:05:26.085 "bdev_null_create", 00:05:26.085 "bdev_nvme_cuse_unregister", 00:05:26.085 "bdev_nvme_cuse_register", 00:05:26.085 "bdev_opal_new_user", 00:05:26.085 "bdev_opal_set_lock_state", 00:05:26.085 "bdev_opal_delete", 00:05:26.085 "bdev_opal_get_info", 00:05:26.085 "bdev_opal_create", 00:05:26.085 "bdev_nvme_opal_revert", 00:05:26.085 "bdev_nvme_opal_init", 00:05:26.085 "bdev_nvme_send_cmd", 00:05:26.085 "bdev_nvme_set_keys", 00:05:26.085 "bdev_nvme_get_path_iostat", 00:05:26.085 "bdev_nvme_get_mdns_discovery_info", 00:05:26.085 "bdev_nvme_stop_mdns_discovery", 00:05:26.085 "bdev_nvme_start_mdns_discovery", 00:05:26.085 "bdev_nvme_set_multipath_policy", 00:05:26.085 "bdev_nvme_set_preferred_path", 00:05:26.085 "bdev_nvme_get_io_paths", 00:05:26.085 "bdev_nvme_remove_error_injection", 00:05:26.085 "bdev_nvme_add_error_injection", 00:05:26.085 "bdev_nvme_get_discovery_info", 00:05:26.085 "bdev_nvme_stop_discovery", 00:05:26.085 "bdev_nvme_start_discovery", 00:05:26.085 "bdev_nvme_get_controller_health_info", 00:05:26.085 "bdev_nvme_disable_controller", 00:05:26.085 "bdev_nvme_enable_controller", 00:05:26.085 "bdev_nvme_reset_controller", 00:05:26.086 "bdev_nvme_get_transport_statistics", 00:05:26.086 "bdev_nvme_apply_firmware", 00:05:26.086 "bdev_nvme_detach_controller", 00:05:26.086 "bdev_nvme_get_controllers", 00:05:26.086 "bdev_nvme_attach_controller", 00:05:26.086 "bdev_nvme_set_hotplug", 00:05:26.086 "bdev_nvme_set_options", 00:05:26.086 "bdev_passthru_delete", 00:05:26.086 "bdev_passthru_create", 00:05:26.086 "bdev_lvol_set_parent_bdev", 00:05:26.086 "bdev_lvol_set_parent", 00:05:26.086 "bdev_lvol_check_shallow_copy", 00:05:26.086 "bdev_lvol_start_shallow_copy", 00:05:26.086 "bdev_lvol_grow_lvstore", 00:05:26.086 "bdev_lvol_get_lvols", 00:05:26.086 "bdev_lvol_get_lvstores", 00:05:26.086 "bdev_lvol_delete", 00:05:26.086 "bdev_lvol_set_read_only", 00:05:26.086 "bdev_lvol_resize", 00:05:26.086 "bdev_lvol_decouple_parent", 00:05:26.086 "bdev_lvol_inflate", 00:05:26.086 "bdev_lvol_rename", 00:05:26.086 "bdev_lvol_clone_bdev", 00:05:26.086 "bdev_lvol_clone", 00:05:26.086 "bdev_lvol_snapshot", 00:05:26.086 "bdev_lvol_create", 00:05:26.086 "bdev_lvol_delete_lvstore", 00:05:26.086 "bdev_lvol_rename_lvstore", 
00:05:26.086 "bdev_lvol_create_lvstore", 00:05:26.086 "bdev_raid_set_options", 00:05:26.086 "bdev_raid_remove_base_bdev", 00:05:26.086 "bdev_raid_add_base_bdev", 00:05:26.086 "bdev_raid_delete", 00:05:26.086 "bdev_raid_create", 00:05:26.086 "bdev_raid_get_bdevs", 00:05:26.086 "bdev_error_inject_error", 00:05:26.086 "bdev_error_delete", 00:05:26.086 "bdev_error_create", 00:05:26.086 "bdev_split_delete", 00:05:26.086 "bdev_split_create", 00:05:26.086 "bdev_delay_delete", 00:05:26.086 "bdev_delay_create", 00:05:26.086 "bdev_delay_update_latency", 00:05:26.086 "bdev_zone_block_delete", 00:05:26.086 "bdev_zone_block_create", 00:05:26.086 "blobfs_create", 00:05:26.086 "blobfs_detect", 00:05:26.086 "blobfs_set_cache_size", 00:05:26.086 "bdev_aio_delete", 00:05:26.086 "bdev_aio_rescan", 00:05:26.086 "bdev_aio_create", 00:05:26.086 "bdev_ftl_set_property", 00:05:26.086 "bdev_ftl_get_properties", 00:05:26.086 "bdev_ftl_get_stats", 00:05:26.086 "bdev_ftl_unmap", 00:05:26.086 "bdev_ftl_unload", 00:05:26.086 "bdev_ftl_delete", 00:05:26.086 "bdev_ftl_load", 00:05:26.086 "bdev_ftl_create", 00:05:26.086 "bdev_virtio_attach_controller", 00:05:26.086 "bdev_virtio_scsi_get_devices", 00:05:26.086 "bdev_virtio_detach_controller", 00:05:26.086 "bdev_virtio_blk_set_hotplug", 00:05:26.086 "bdev_iscsi_delete", 00:05:26.086 "bdev_iscsi_create", 00:05:26.086 "bdev_iscsi_set_options", 00:05:26.086 "accel_error_inject_error", 00:05:26.086 "ioat_scan_accel_module", 00:05:26.086 "dsa_scan_accel_module", 00:05:26.086 "iaa_scan_accel_module", 00:05:26.086 "keyring_file_remove_key", 00:05:26.086 "keyring_file_add_key", 00:05:26.086 "keyring_linux_set_options", 00:05:26.086 "fsdev_aio_delete", 00:05:26.086 "fsdev_aio_create", 00:05:26.086 "iscsi_get_histogram", 00:05:26.086 "iscsi_enable_histogram", 00:05:26.086 "iscsi_set_options", 00:05:26.086 "iscsi_get_auth_groups", 00:05:26.086 "iscsi_auth_group_remove_secret", 00:05:26.086 "iscsi_auth_group_add_secret", 00:05:26.086 "iscsi_delete_auth_group", 00:05:26.086 "iscsi_create_auth_group", 00:05:26.086 "iscsi_set_discovery_auth", 00:05:26.086 "iscsi_get_options", 00:05:26.086 "iscsi_target_node_request_logout", 00:05:26.086 "iscsi_target_node_set_redirect", 00:05:26.086 "iscsi_target_node_set_auth", 00:05:26.086 "iscsi_target_node_add_lun", 00:05:26.086 "iscsi_get_stats", 00:05:26.086 "iscsi_get_connections", 00:05:26.086 "iscsi_portal_group_set_auth", 00:05:26.086 "iscsi_start_portal_group", 00:05:26.086 "iscsi_delete_portal_group", 00:05:26.086 "iscsi_create_portal_group", 00:05:26.086 "iscsi_get_portal_groups", 00:05:26.086 "iscsi_delete_target_node", 00:05:26.086 "iscsi_target_node_remove_pg_ig_maps", 00:05:26.086 "iscsi_target_node_add_pg_ig_maps", 00:05:26.086 "iscsi_create_target_node", 00:05:26.086 "iscsi_get_target_nodes", 00:05:26.086 "iscsi_delete_initiator_group", 00:05:26.086 "iscsi_initiator_group_remove_initiators", 00:05:26.086 "iscsi_initiator_group_add_initiators", 00:05:26.086 "iscsi_create_initiator_group", 00:05:26.086 "iscsi_get_initiator_groups", 00:05:26.086 "nvmf_set_crdt", 00:05:26.086 "nvmf_set_config", 00:05:26.086 "nvmf_set_max_subsystems", 00:05:26.086 "nvmf_stop_mdns_prr", 00:05:26.086 "nvmf_publish_mdns_prr", 00:05:26.086 "nvmf_subsystem_get_listeners", 00:05:26.086 "nvmf_subsystem_get_qpairs", 00:05:26.086 "nvmf_subsystem_get_controllers", 00:05:26.086 "nvmf_get_stats", 00:05:26.086 "nvmf_get_transports", 00:05:26.086 "nvmf_create_transport", 00:05:26.086 "nvmf_get_targets", 00:05:26.086 "nvmf_delete_target", 00:05:26.086 "nvmf_create_target", 
00:05:26.086 "nvmf_subsystem_allow_any_host", 00:05:26.086 "nvmf_subsystem_set_keys", 00:05:26.086 "nvmf_subsystem_remove_host", 00:05:26.086 "nvmf_subsystem_add_host", 00:05:26.086 "nvmf_ns_remove_host", 00:05:26.086 "nvmf_ns_add_host", 00:05:26.086 "nvmf_subsystem_remove_ns", 00:05:26.086 "nvmf_subsystem_set_ns_ana_group", 00:05:26.086 "nvmf_subsystem_add_ns", 00:05:26.086 "nvmf_subsystem_listener_set_ana_state", 00:05:26.086 "nvmf_discovery_get_referrals", 00:05:26.086 "nvmf_discovery_remove_referral", 00:05:26.086 "nvmf_discovery_add_referral", 00:05:26.086 "nvmf_subsystem_remove_listener", 00:05:26.086 "nvmf_subsystem_add_listener", 00:05:26.086 "nvmf_delete_subsystem", 00:05:26.086 "nvmf_create_subsystem", 00:05:26.086 "nvmf_get_subsystems", 00:05:26.086 "env_dpdk_get_mem_stats", 00:05:26.086 "nbd_get_disks", 00:05:26.086 "nbd_stop_disk", 00:05:26.086 "nbd_start_disk", 00:05:26.086 "ublk_recover_disk", 00:05:26.086 "ublk_get_disks", 00:05:26.086 "ublk_stop_disk", 00:05:26.086 "ublk_start_disk", 00:05:26.086 "ublk_destroy_target", 00:05:26.086 "ublk_create_target", 00:05:26.086 "virtio_blk_create_transport", 00:05:26.086 "virtio_blk_get_transports", 00:05:26.086 "vhost_controller_set_coalescing", 00:05:26.086 "vhost_get_controllers", 00:05:26.086 "vhost_delete_controller", 00:05:26.086 "vhost_create_blk_controller", 00:05:26.086 "vhost_scsi_controller_remove_target", 00:05:26.086 "vhost_scsi_controller_add_target", 00:05:26.086 "vhost_start_scsi_controller", 00:05:26.086 "vhost_create_scsi_controller", 00:05:26.086 "thread_set_cpumask", 00:05:26.086 "scheduler_set_options", 00:05:26.086 "framework_get_governor", 00:05:26.086 "framework_get_scheduler", 00:05:26.086 "framework_set_scheduler", 00:05:26.086 "framework_get_reactors", 00:05:26.086 "thread_get_io_channels", 00:05:26.086 "thread_get_pollers", 00:05:26.086 "thread_get_stats", 00:05:26.086 "framework_monitor_context_switch", 00:05:26.086 "spdk_kill_instance", 00:05:26.086 "log_enable_timestamps", 00:05:26.086 "log_get_flags", 00:05:26.086 "log_clear_flag", 00:05:26.086 "log_set_flag", 00:05:26.086 "log_get_level", 00:05:26.086 "log_set_level", 00:05:26.086 "log_get_print_level", 00:05:26.086 "log_set_print_level", 00:05:26.086 "framework_enable_cpumask_locks", 00:05:26.086 "framework_disable_cpumask_locks", 00:05:26.086 "framework_wait_init", 00:05:26.086 "framework_start_init", 00:05:26.086 "scsi_get_devices", 00:05:26.086 "bdev_get_histogram", 00:05:26.086 "bdev_enable_histogram", 00:05:26.086 "bdev_set_qos_limit", 00:05:26.086 "bdev_set_qd_sampling_period", 00:05:26.086 "bdev_get_bdevs", 00:05:26.086 "bdev_reset_iostat", 00:05:26.086 "bdev_get_iostat", 00:05:26.086 "bdev_examine", 00:05:26.086 "bdev_wait_for_examine", 00:05:26.086 "bdev_set_options", 00:05:26.086 "accel_get_stats", 00:05:26.086 "accel_set_options", 00:05:26.086 "accel_set_driver", 00:05:26.086 "accel_crypto_key_destroy", 00:05:26.086 "accel_crypto_keys_get", 00:05:26.086 "accel_crypto_key_create", 00:05:26.086 "accel_assign_opc", 00:05:26.086 "accel_get_module_info", 00:05:26.086 "accel_get_opc_assignments", 00:05:26.086 "vmd_rescan", 00:05:26.086 "vmd_remove_device", 00:05:26.086 "vmd_enable", 00:05:26.086 "sock_get_default_impl", 00:05:26.086 "sock_set_default_impl", 00:05:26.086 "sock_impl_set_options", 00:05:26.086 "sock_impl_get_options", 00:05:26.086 "iobuf_get_stats", 00:05:26.086 "iobuf_set_options", 00:05:26.086 "keyring_get_keys", 00:05:26.086 "framework_get_pci_devices", 00:05:26.086 "framework_get_config", 00:05:26.086 "framework_get_subsystems", 
00:05:26.086 "fsdev_set_opts", 00:05:26.086 "fsdev_get_opts", 00:05:26.086 "trace_get_info", 00:05:26.086 "trace_get_tpoint_group_mask", 00:05:26.086 "trace_disable_tpoint_group", 00:05:26.086 "trace_enable_tpoint_group", 00:05:26.086 "trace_clear_tpoint_mask", 00:05:26.086 "trace_set_tpoint_mask", 00:05:26.086 "notify_get_notifications", 00:05:26.086 "notify_get_types", 00:05:26.086 "spdk_get_version", 00:05:26.086 "rpc_get_methods" 00:05:26.086 ] 00:05:26.086 20:54:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:26.086 20:54:59 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.086 20:54:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.086 20:54:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:26.086 20:54:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2852556 00:05:26.087 20:54:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2852556 ']' 00:05:26.087 20:54:59 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2852556 00:05:26.087 20:54:59 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:26.087 20:54:59 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.087 20:54:59 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2852556 00:05:26.087 20:54:59 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.087 20:54:59 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.087 20:54:59 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2852556' 00:05:26.087 killing process with pid 2852556 00:05:26.087 20:54:59 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2852556 00:05:26.087 20:54:59 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2852556 00:05:28.672 00:05:28.672 real 0m4.163s 00:05:28.672 user 0m7.649s 00:05:28.672 sys 0m0.696s 00:05:28.672 20:55:02 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.672 20:55:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.672 ************************************ 00:05:28.672 END TEST spdkcli_tcp 00:05:28.672 ************************************ 00:05:28.672 20:55:02 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:28.672 20:55:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.672 20:55:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.672 20:55:02 -- common/autotest_common.sh@10 -- # set +x 00:05:28.672 ************************************ 00:05:28.672 START TEST dpdk_mem_utility 00:05:28.672 ************************************ 00:05:28.672 20:55:02 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:28.672 * Looking for test storage... 
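The spdkcli_tcp flow that produced the method list above runs entirely over TCP: socat bridges a listener on 127.0.0.1:9998 to the target's UNIX-domain RPC socket, and rpc.py is pointed at that TCP endpoint with the retry and timeout options seen in the trace. A compact sketch of the round trip, with explicit cleanup added here:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # one-shot bridge
    socat_pid=$!
    # -r/-t are the connection-retry and timeout options used by the test
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid" 2> /dev/null || true

The generous -r retry count also papers over the brief window before socat's listener is ready to accept the client connection.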
00:05:28.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:28.672 20:55:02 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:28.672 20:55:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:28.672 20:55:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:28.672 20:55:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.672 20:55:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:28.672 20:55:02 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.672 20:55:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:28.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.672 --rc genhtml_branch_coverage=1 00:05:28.672 --rc genhtml_function_coverage=1 00:05:28.672 --rc genhtml_legend=1 00:05:28.672 --rc geninfo_all_blocks=1 00:05:28.672 --rc geninfo_unexecuted_blocks=1 00:05:28.672 00:05:28.672 ' 00:05:28.672 20:55:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:28.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.672 --rc 
genhtml_branch_coverage=1 00:05:28.672 --rc genhtml_function_coverage=1 00:05:28.672 --rc genhtml_legend=1 00:05:28.672 --rc geninfo_all_blocks=1 00:05:28.672 --rc geninfo_unexecuted_blocks=1 00:05:28.672 00:05:28.672 ' 00:05:28.672 20:55:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:28.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.672 --rc genhtml_branch_coverage=1 00:05:28.672 --rc genhtml_function_coverage=1 00:05:28.672 --rc genhtml_legend=1 00:05:28.672 --rc geninfo_all_blocks=1 00:05:28.672 --rc geninfo_unexecuted_blocks=1 00:05:28.672 00:05:28.672 ' 00:05:28.672 20:55:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:28.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.672 --rc genhtml_branch_coverage=1 00:05:28.672 --rc genhtml_function_coverage=1 00:05:28.672 --rc genhtml_legend=1 00:05:28.672 --rc geninfo_all_blocks=1 00:05:28.672 --rc geninfo_unexecuted_blocks=1 00:05:28.672 00:05:28.672 ' 00:05:28.672 20:55:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:28.672 20:55:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2853167 00:05:28.672 20:55:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.672 20:55:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2853167 00:05:28.672 20:55:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2853167 ']' 00:05:28.672 20:55:02 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.672 20:55:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.672 20:55:02 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.672 20:55:02 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.672 20:55:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.930 [2024-11-19 20:55:02.484577] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:28.930 [2024-11-19 20:55:02.484730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853167 ] 00:05:28.930 [2024-11-19 20:55:02.627480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.188 [2024-11-19 20:55:02.764699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.122 20:55:03 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.122 20:55:03 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:30.122 20:55:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:30.122 20:55:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:30.122 20:55:03 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.122 20:55:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:30.122 { 00:05:30.122 "filename": "/tmp/spdk_mem_dump.txt" 00:05:30.122 } 00:05:30.122 20:55:03 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.122 20:55:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:30.122 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:30.122 1 heaps totaling size 816.000000 MiB 00:05:30.122 size: 816.000000 MiB heap id: 0 00:05:30.122 end heaps---------- 00:05:30.122 9 mempools totaling size 595.772034 MiB 00:05:30.122 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:30.122 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:30.122 size: 92.545471 MiB name: bdev_io_2853167 00:05:30.122 size: 50.003479 MiB name: msgpool_2853167 00:05:30.122 size: 36.509338 MiB name: fsdev_io_2853167 00:05:30.122 size: 21.763794 MiB name: PDU_Pool 00:05:30.122 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:30.122 size: 4.133484 MiB name: evtpool_2853167 00:05:30.122 size: 0.026123 MiB name: Session_Pool 00:05:30.122 end mempools------- 00:05:30.122 6 memzones totaling size 4.142822 MiB 00:05:30.122 size: 1.000366 MiB name: RG_ring_0_2853167 00:05:30.122 size: 1.000366 MiB name: RG_ring_1_2853167 00:05:30.122 size: 1.000366 MiB name: RG_ring_4_2853167 00:05:30.122 size: 1.000366 MiB name: RG_ring_5_2853167 00:05:30.122 size: 0.125366 MiB name: RG_ring_2_2853167 00:05:30.122 size: 0.015991 MiB name: RG_ring_3_2853167 00:05:30.122 end memzones------- 00:05:30.122 20:55:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:30.122 heap id: 0 total size: 816.000000 MiB number of busy elements: 44 number of free elements: 19 00:05:30.122 list of free elements. 
size: 16.857605 MiB 00:05:30.122 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:30.122 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:30.122 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:30.122 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:30.122 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:30.122 element at address: 0x200019200000 with size: 0.999329 MiB 00:05:30.122 element at address: 0x200000400000 with size: 0.998108 MiB 00:05:30.122 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:30.123 element at address: 0x200018a00000 with size: 0.959900 MiB 00:05:30.123 element at address: 0x200019500040 with size: 0.937256 MiB 00:05:30.123 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:30.123 element at address: 0x20001ac00000 with size: 0.583191 MiB 00:05:30.123 element at address: 0x200000c00000 with size: 0.495300 MiB 00:05:30.123 element at address: 0x200018e00000 with size: 0.491150 MiB 00:05:30.123 element at address: 0x200019600000 with size: 0.485657 MiB 00:05:30.123 element at address: 0x200012c00000 with size: 0.446167 MiB 00:05:30.123 element at address: 0x200028000000 with size: 0.411072 MiB 00:05:30.123 element at address: 0x200000800000 with size: 0.355286 MiB 00:05:30.123 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:05:30.123 list of standard malloc elements. size: 199.221497 MiB 00:05:30.123 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:30.123 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:30.123 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:30.123 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:30.123 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:30.123 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:30.123 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:30.123 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:30.123 element at address: 0x200012bff040 with size: 0.000427 MiB 00:05:30.123 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:05:30.123 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:30.123 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:30.123 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:30.123 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:30.123 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:05:30.123 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:30.123 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:30.123 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:30.123 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:30.123 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:30.123 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:30.123 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:30.123 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:30.123 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:05:30.123 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:05:30.123 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:05:30.123 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:05:30.123 element at address: 0x20000a5ff880 with size: 0.000244 MiB 00:05:30.123 element at address: 0x20000a5ff980 with size: 0.000244 MiB 
00:05:30.123 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:30.123 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:30.123 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:30.123 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:30.123 element at address: 0x200012bff200 with size: 0.000244 MiB 00:05:30.123 element at address: 0x200012bff300 with size: 0.000244 MiB 00:05:30.123 element at address: 0x200012bff400 with size: 0.000244 MiB 00:05:30.123 element at address: 0x200012bff500 with size: 0.000244 MiB 00:05:30.123 element at address: 0x200012bff600 with size: 0.000244 MiB 00:05:30.123 element at address: 0x200012bff700 with size: 0.000244 MiB 00:05:30.123 element at address: 0x200012bff800 with size: 0.000244 MiB 00:05:30.123 element at address: 0x200012bff900 with size: 0.000244 MiB 00:05:30.123 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:30.123 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:30.123 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:30.123 list of memzone associated elements. size: 599.920898 MiB 00:05:30.123 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:30.123 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:30.123 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:30.123 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:30.123 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:30.123 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2853167_0 00:05:30.123 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:30.123 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2853167_0 00:05:30.123 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:30.123 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2853167_0 00:05:30.123 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:30.123 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:30.123 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:30.123 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:30.123 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:30.123 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2853167_0 00:05:30.123 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:30.123 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2853167 00:05:30.123 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:30.123 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2853167 00:05:30.123 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:30.123 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:30.123 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:30.123 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:30.123 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:30.123 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:30.123 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:30.123 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:30.123 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:30.123 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2853167 00:05:30.123 element at address: 0x2000008ffb80 with 
size: 1.000549 MiB 00:05:30.123 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2853167 00:05:30.123 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:30.123 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2853167 00:05:30.123 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:30.123 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2853167 00:05:30.123 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:30.123 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2853167 00:05:30.123 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:30.123 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2853167 00:05:30.123 element at address: 0x200018e7dbc0 with size: 0.500549 MiB 00:05:30.123 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:30.123 element at address: 0x200012c72380 with size: 0.500549 MiB 00:05:30.123 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:30.123 element at address: 0x20001967c540 with size: 0.250549 MiB 00:05:30.123 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:30.123 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:30.123 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2853167 00:05:30.123 element at address: 0x20000085f180 with size: 0.125549 MiB 00:05:30.123 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2853167 00:05:30.123 element at address: 0x200018af5bc0 with size: 0.031799 MiB 00:05:30.123 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:30.123 element at address: 0x2000280693c0 with size: 0.023804 MiB 00:05:30.123 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:30.123 element at address: 0x20000085af40 with size: 0.016174 MiB 00:05:30.123 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2853167 00:05:30.123 element at address: 0x20002806f540 with size: 0.002502 MiB 00:05:30.123 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:30.123 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:05:30.123 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2853167 00:05:30.123 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:30.123 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2853167 00:05:30.123 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:30.123 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2853167 00:05:30.123 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:05:30.123 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:30.123 20:55:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:30.123 20:55:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2853167 00:05:30.123 20:55:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2853167 ']' 00:05:30.123 20:55:03 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2853167 00:05:30.124 20:55:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:30.124 20:55:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.124 20:55:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2853167 00:05:30.124 20:55:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:05:30.124 20:55:03 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.124 20:55:03 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2853167' 00:05:30.124 killing process with pid 2853167 00:05:30.124 20:55:03 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2853167 00:05:30.124 20:55:03 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2853167 00:05:32.651 00:05:32.651 real 0m4.069s 00:05:32.651 user 0m4.062s 00:05:32.651 sys 0m0.669s 00:05:32.651 20:55:06 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.651 20:55:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:32.651 ************************************ 00:05:32.651 END TEST dpdk_mem_utility 00:05:32.651 ************************************ 00:05:32.651 20:55:06 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:32.651 20:55:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.651 20:55:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.651 20:55:06 -- common/autotest_common.sh@10 -- # set +x 00:05:32.651 ************************************ 00:05:32.651 START TEST event 00:05:32.651 ************************************ 00:05:32.651 20:55:06 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:32.651 * Looking for test storage... 00:05:32.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:32.651 20:55:06 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:32.651 20:55:06 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:32.651 20:55:06 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:32.907 20:55:06 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:32.907 20:55:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.907 20:55:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.907 20:55:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.907 20:55:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.907 20:55:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.907 20:55:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.907 20:55:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.907 20:55:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.907 20:55:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.907 20:55:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.907 20:55:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.907 20:55:06 event -- scripts/common.sh@344 -- # case "$op" in 00:05:32.907 20:55:06 event -- scripts/common.sh@345 -- # : 1 00:05:32.907 20:55:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.907 20:55:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.907 20:55:06 event -- scripts/common.sh@365 -- # decimal 1 00:05:32.907 20:55:06 event -- scripts/common.sh@353 -- # local d=1 00:05:32.907 20:55:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.907 20:55:06 event -- scripts/common.sh@355 -- # echo 1 00:05:32.907 20:55:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.907 20:55:06 event -- scripts/common.sh@366 -- # decimal 2 00:05:32.907 20:55:06 event -- scripts/common.sh@353 -- # local d=2 00:05:32.907 20:55:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.907 20:55:06 event -- scripts/common.sh@355 -- # echo 2 00:05:32.907 20:55:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.907 20:55:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.908 20:55:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.908 20:55:06 event -- scripts/common.sh@368 -- # return 0 00:05:32.908 20:55:06 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.908 20:55:06 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:32.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.908 --rc genhtml_branch_coverage=1 00:05:32.908 --rc genhtml_function_coverage=1 00:05:32.908 --rc genhtml_legend=1 00:05:32.908 --rc geninfo_all_blocks=1 00:05:32.908 --rc geninfo_unexecuted_blocks=1 00:05:32.908 00:05:32.908 ' 00:05:32.908 20:55:06 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:32.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.908 --rc genhtml_branch_coverage=1 00:05:32.908 --rc genhtml_function_coverage=1 00:05:32.908 --rc genhtml_legend=1 00:05:32.908 --rc geninfo_all_blocks=1 00:05:32.908 --rc geninfo_unexecuted_blocks=1 00:05:32.908 00:05:32.908 ' 00:05:32.908 20:55:06 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:32.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.908 --rc genhtml_branch_coverage=1 00:05:32.908 --rc genhtml_function_coverage=1 00:05:32.908 --rc genhtml_legend=1 00:05:32.908 --rc geninfo_all_blocks=1 00:05:32.908 --rc geninfo_unexecuted_blocks=1 00:05:32.908 00:05:32.908 ' 00:05:32.908 20:55:06 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:32.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.908 --rc genhtml_branch_coverage=1 00:05:32.908 --rc genhtml_function_coverage=1 00:05:32.908 --rc genhtml_legend=1 00:05:32.908 --rc geninfo_all_blocks=1 00:05:32.908 --rc geninfo_unexecuted_blocks=1 00:05:32.908 00:05:32.908 ' 00:05:32.908 20:55:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:32.908 20:55:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:32.908 20:55:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:32.908 20:55:06 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:32.908 20:55:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.908 20:55:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.908 ************************************ 00:05:32.908 START TEST event_perf 00:05:32.908 ************************************ 00:05:32.908 20:55:06 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:32.908 Running I/O for 1 seconds...[2024-11-19 20:55:06.566875] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:32.908 [2024-11-19 20:55:06.567012] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853678 ] 00:05:33.165 [2024-11-19 20:55:06.719925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:33.165 [2024-11-19 20:55:06.866033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.165 [2024-11-19 20:55:06.866118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.165 [2024-11-19 20:55:06.866198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.165 [2024-11-19 20:55:06.866207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.536 Running I/O for 1 seconds... 00:05:34.536 lcore 0: 219460 00:05:34.536 lcore 1: 219459 00:05:34.536 lcore 2: 219460 00:05:34.536 lcore 3: 219460 00:05:34.536 done. 00:05:34.536 00:05:34.536 real 0m1.606s 00:05:34.536 user 0m4.419s 00:05:34.536 sys 0m0.173s 00:05:34.536 20:55:08 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.536 20:55:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.536 ************************************ 00:05:34.536 END TEST event_perf 00:05:34.536 ************************************ 00:05:34.536 20:55:08 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:34.536 20:55:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:34.536 20:55:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.536 20:55:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.536 ************************************ 00:05:34.536 START TEST event_reactor 00:05:34.536 ************************************ 00:05:34.536 20:55:08 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:34.536 [2024-11-19 20:55:08.213967] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:34.536 [2024-11-19 20:55:08.214089] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853926 ] 00:05:34.794 [2024-11-19 20:55:08.354747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.794 [2024-11-19 20:55:08.493036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.166 test_start 00:05:36.166 oneshot 00:05:36.166 tick 100 00:05:36.166 tick 100 00:05:36.166 tick 250 00:05:36.166 tick 100 00:05:36.166 tick 100 00:05:36.166 tick 100 00:05:36.166 tick 250 00:05:36.166 tick 500 00:05:36.166 tick 100 00:05:36.166 tick 100 00:05:36.166 tick 250 00:05:36.166 tick 100 00:05:36.166 tick 100 00:05:36.166 test_end 00:05:36.166 00:05:36.166 real 0m1.570s 00:05:36.166 user 0m1.416s 00:05:36.166 sys 0m0.146s 00:05:36.166 20:55:09 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.166 20:55:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:36.166 ************************************ 00:05:36.166 END TEST event_reactor 00:05:36.166 ************************************ 00:05:36.166 20:55:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:36.166 20:55:09 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:36.166 20:55:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.166 20:55:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.166 ************************************ 00:05:36.166 START TEST event_reactor_perf 00:05:36.166 ************************************ 00:05:36.166 20:55:09 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:36.166 [2024-11-19 20:55:09.827148] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:36.166 [2024-11-19 20:55:09.827274] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2854083 ] 00:05:36.424 [2024-11-19 20:55:09.968742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.424 [2024-11-19 20:55:10.104873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.800 test_start 00:05:37.800 test_end 00:05:37.800 Performance: 267489 events per second 00:05:37.800 00:05:37.800 real 0m1.567s 00:05:37.800 user 0m1.413s 00:05:37.800 sys 0m0.144s 00:05:37.800 20:55:11 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.800 20:55:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.800 ************************************ 00:05:37.800 END TEST event_reactor_perf 00:05:37.800 ************************************ 00:05:37.800 20:55:11 event -- event/event.sh@49 -- # uname -s 00:05:37.800 20:55:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:37.800 20:55:11 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:37.800 20:55:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.800 20:55:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.800 20:55:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.800 ************************************ 00:05:37.800 START TEST event_scheduler 00:05:37.800 ************************************ 00:05:37.800 20:55:11 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:37.800 * Looking for test storage... 
00:05:37.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:37.800 20:55:11 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.800 20:55:11 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.800 20:55:11 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.800 20:55:11 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.800 20:55:11 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.801 20:55:11 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:37.801 20:55:11 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.801 20:55:11 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.801 --rc genhtml_branch_coverage=1 00:05:37.801 --rc genhtml_function_coverage=1 00:05:37.801 --rc genhtml_legend=1 00:05:37.801 --rc geninfo_all_blocks=1 00:05:37.801 --rc geninfo_unexecuted_blocks=1 00:05:37.801 00:05:37.801 ' 00:05:37.801 20:55:11 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.801 --rc genhtml_branch_coverage=1 00:05:37.801 --rc genhtml_function_coverage=1 00:05:37.801 --rc genhtml_legend=1 00:05:37.801 --rc geninfo_all_blocks=1 00:05:37.801 --rc geninfo_unexecuted_blocks=1 00:05:37.801 00:05:37.801 ' 00:05:37.801 20:55:11 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:37.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.801 --rc genhtml_branch_coverage=1 00:05:37.801 --rc genhtml_function_coverage=1 00:05:37.801 --rc genhtml_legend=1 00:05:37.801 --rc geninfo_all_blocks=1 00:05:37.801 --rc geninfo_unexecuted_blocks=1 00:05:37.801 00:05:37.801 ' 00:05:37.801 20:55:11 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.801 --rc genhtml_branch_coverage=1 00:05:37.801 --rc genhtml_function_coverage=1 00:05:37.801 --rc genhtml_legend=1 00:05:37.801 --rc geninfo_all_blocks=1 00:05:37.801 --rc geninfo_unexecuted_blocks=1 00:05:37.801 00:05:37.801 ' 00:05:37.801 20:55:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:37.801 20:55:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2854400 00:05:37.801 20:55:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:37.801 20:55:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.801 20:55:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2854400 00:05:37.801 20:55:11 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2854400 ']' 00:05:37.801 20:55:11 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.801 20:55:11 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.801 20:55:11 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.801 20:55:11 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.801 20:55:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.059 [2024-11-19 20:55:11.628755] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:05:38.059 [2024-11-19 20:55:11.628925] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2854400 ] 00:05:38.059 [2024-11-19 20:55:11.777166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:38.318 [2024-11-19 20:55:11.903466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.318 [2024-11-19 20:55:11.903523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.318 [2024-11-19 20:55:11.903580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.318 [2024-11-19 20:55:11.903589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.883 20:55:12 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.883 20:55:12 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:38.883 20:55:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:38.883 20:55:12 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.883 20:55:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.883 [2024-11-19 20:55:12.606654] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:38.883 [2024-11-19 20:55:12.606697] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:38.884 [2024-11-19 20:55:12.606728] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:38.884 [2024-11-19 20:55:12.606746] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:38.884 [2024-11-19 20:55:12.606775] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:38.884 20:55:12 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.884 20:55:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:38.884 20:55:12 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.884 20:55:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.142 [2024-11-19 20:55:12.913847] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:39.142 20:55:12 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.142 20:55:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:39.142 20:55:12 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.142 20:55:12 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.142 20:55:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.401 ************************************ 00:05:39.401 START TEST scheduler_create_thread 00:05:39.401 ************************************ 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.401 2 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.401 3 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.401 4 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.401 5 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.401 6 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.401 7 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.401 20:55:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.401 8 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.401 9 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.401 10 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.401 00:05:39.401 real 0m0.109s 00:05:39.401 user 0m0.010s 00:05:39.401 sys 0m0.003s 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.401 20:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.401 ************************************ 00:05:39.401 END TEST scheduler_create_thread 00:05:39.401 ************************************ 00:05:39.401 20:55:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:39.401 20:55:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2854400 00:05:39.401 20:55:13 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2854400 ']' 00:05:39.401 20:55:13 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2854400 00:05:39.401 20:55:13 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:39.401 20:55:13 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.401 20:55:13 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2854400 00:05:39.401 20:55:13 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:39.401 20:55:13 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:39.401 20:55:13 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2854400' 00:05:39.401 killing process with pid 2854400 00:05:39.401 20:55:13 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2854400 00:05:39.401 20:55:13 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2854400 00:05:39.968 [2024-11-19 20:55:13.536961] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:40.905 00:05:40.905 real 0m3.104s 00:05:40.905 user 0m5.428s 00:05:40.905 sys 0m0.496s 00:05:40.905 20:55:14 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.905 20:55:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.905 ************************************ 00:05:40.905 END TEST event_scheduler 00:05:40.905 ************************************ 00:05:40.905 20:55:14 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:40.905 20:55:14 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:40.905 20:55:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.905 20:55:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.905 20:55:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.905 ************************************ 00:05:40.905 START TEST app_repeat 00:05:40.905 ************************************ 00:05:40.905 20:55:14 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:40.905 20:55:14 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.905 20:55:14 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.905 20:55:14 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:40.905 20:55:14 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.905 20:55:14 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:40.905 20:55:14 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:40.905 20:55:14 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:40.905 20:55:14 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2854851 00:05:40.905 20:55:14 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:40.905 20:55:14 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.905 20:55:14 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2854851' 00:05:40.905 Process app_repeat pid: 2854851 00:05:40.905 20:55:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.905 20:55:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:40.905 spdk_app_start Round 0 00:05:40.905 20:55:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2854851 /var/tmp/spdk-nbd.sock 00:05:40.905 20:55:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2854851 ']' 00:05:40.905 20:55:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.905 20:55:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.905 20:55:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.905 20:55:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.905 20:55:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.905 [2024-11-19 20:55:14.618915] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:05:40.905 [2024-11-19 20:55:14.619076] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2854851 ] 00:05:41.164 [2024-11-19 20:55:14.763276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.164 [2024-11-19 20:55:14.902013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.164 [2024-11-19 20:55:14.902020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.099 20:55:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.099 20:55:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:42.099 20:55:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.356 Malloc0 00:05:42.356 20:55:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.614 Malloc1 00:05:42.614 20:55:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.614 20:55:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.614 20:55:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.614 20:55:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:42.614 20:55:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.614 20:55:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:42.614 20:55:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.614 20:55:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.614 20:55:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.614 20:55:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:42.614 20:55:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.614 20:55:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:42.614 20:55:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:42.614 20:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:42.614 20:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.614 20:55:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:42.873 /dev/nbd0 00:05:42.873 20:55:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:42.873 20:55:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:42.873 20:55:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:42.873 20:55:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:42.873 20:55:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:42.873 20:55:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:42.873 20:55:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:42.873 20:55:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:42.873 20:55:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:42.873 20:55:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:42.873 20:55:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.873 1+0 records in 00:05:42.873 1+0 records out 00:05:42.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182799 s, 22.4 MB/s 00:05:42.873 20:55:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.873 20:55:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:42.873 20:55:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.873 20:55:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:42.873 20:55:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:42.873 20:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.873 20:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.873 20:55:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.440 /dev/nbd1 00:05:43.440 20:55:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.440 20:55:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.440 20:55:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:43.440 20:55:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:43.440 20:55:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:43.440 20:55:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:43.440 20:55:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:43.440 20:55:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:43.440 20:55:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:43.440 20:55:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:43.440 20:55:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.440 1+0 records in 00:05:43.440 1+0 records out 00:05:43.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264042 s, 15.5 MB/s 00:05:43.440 20:55:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.440 20:55:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:43.440 20:55:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.440 20:55:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:43.440 20:55:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:43.440 20:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.440 20:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.440 
20:55:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.440 20:55:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.440 20:55:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.698 20:55:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.698 { 00:05:43.698 "nbd_device": "/dev/nbd0", 00:05:43.698 "bdev_name": "Malloc0" 00:05:43.698 }, 00:05:43.698 { 00:05:43.698 "nbd_device": "/dev/nbd1", 00:05:43.698 "bdev_name": "Malloc1" 00:05:43.698 } 00:05:43.698 ]' 00:05:43.698 20:55:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.698 { 00:05:43.698 "nbd_device": "/dev/nbd0", 00:05:43.698 "bdev_name": "Malloc0" 00:05:43.698 }, 00:05:43.698 { 00:05:43.698 "nbd_device": "/dev/nbd1", 00:05:43.698 "bdev_name": "Malloc1" 00:05:43.698 } 00:05:43.698 ]' 00:05:43.698 20:55:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.698 20:55:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.698 /dev/nbd1' 00:05:43.698 20:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.698 /dev/nbd1' 00:05:43.698 20:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.698 20:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.698 20:55:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.698 20:55:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.698 20:55:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.698 20:55:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.698 20:55:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.698 20:55:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.698 20:55:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.698 20:55:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.698 20:55:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.698 20:55:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.698 256+0 records in 00:05:43.698 256+0 records out 00:05:43.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499146 s, 210 MB/s 00:05:43.698 20:55:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.699 256+0 records in 00:05:43.699 256+0 records out 00:05:43.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246047 s, 42.6 MB/s 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.699 256+0 records in 00:05:43.699 256+0 records out 00:05:43.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0294398 s, 35.6 MB/s 00:05:43.699 20:55:17 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.699 20:55:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.956 20:55:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.956 20:55:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.956 20:55:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.956 20:55:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.956 20:55:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.956 20:55:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.956 20:55:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.956 20:55:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.956 20:55:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.956 20:55:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.214 20:55:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.214 20:55:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.214 20:55:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.214 20:55:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.214 20:55:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:44.214 20:55:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.214 20:55:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.214 20:55:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.214 20:55:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.214 20:55:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.214 20:55:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.473 20:55:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.473 20:55:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.473 20:55:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.731 20:55:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.731 20:55:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.731 20:55:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.731 20:55:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:44.731 20:55:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.731 20:55:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.731 20:55:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.731 20:55:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.731 20:55:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.731 20:55:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.989 20:55:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.364 [2024-11-19 20:55:19.918948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.364 [2024-11-19 20:55:20.054190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.364 [2024-11-19 20:55:20.054194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.623 [2024-11-19 20:55:20.268318] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.623 [2024-11-19 20:55:20.268420] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:47.997 20:55:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.997 20:55:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:47.997 spdk_app_start Round 1 00:05:47.997 20:55:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2854851 /var/tmp/spdk-nbd.sock 00:05:47.997 20:55:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2854851 ']' 00:05:47.997 20:55:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.997 20:55:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.997 20:55:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
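For reference, each app_repeat round traced above reduces to the same NBD data-verify sequence driven entirely through rpc.py. The sketch below restates that sequence as standalone shell commands; it is illustrative only — the long workspace prefix is abbreviated to ./scripts, and the /tmp temporary-file location is a stand-in for the nbdrandtest path used in the trace:

    # create two malloc bdevs over the app's RPC socket (arguments 64 4096, as in the trace)
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # reports Malloc0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # reports Malloc1
    # export each bdev as an NBD block device
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
    # write 1 MiB of random data through each NBD device, then verify it reads back intact
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest "$nbd"
    done
    rm /tmp/nbdrandtest
    # detach both devices and tear the app instance down so the next round can start
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM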
00:05:47.997 20:55:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.997 20:55:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.255 20:55:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.255 20:55:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:48.255 20:55:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.513 Malloc0 00:05:48.771 20:55:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.029 Malloc1 00:05:49.029 20:55:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.029 20:55:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.029 20:55:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.029 20:55:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.029 20:55:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.029 20:55:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.029 20:55:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.029 20:55:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.029 20:55:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.029 20:55:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.029 20:55:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.029 20:55:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.029 20:55:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:49.029 20:55:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.029 20:55:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.029 20:55:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.287 /dev/nbd0 00:05:49.287 20:55:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.288 20:55:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.288 20:55:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:49.288 20:55:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:49.288 20:55:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:49.288 20:55:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:49.288 20:55:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:49.288 20:55:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:49.288 20:55:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:49.288 20:55:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:49.288 20:55:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:49.288 1+0 records in 00:05:49.288 1+0 records out 00:05:49.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247522 s, 16.5 MB/s 00:05:49.288 20:55:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.288 20:55:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:49.288 20:55:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.288 20:55:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:49.288 20:55:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:49.288 20:55:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.288 20:55:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.288 20:55:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.546 /dev/nbd1 00:05:49.546 20:55:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.546 20:55:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.546 20:55:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:49.546 20:55:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:49.546 20:55:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:49.546 20:55:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:49.546 20:55:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:49.546 20:55:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:49.546 20:55:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:49.546 20:55:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:49.546 20:55:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.546 1+0 records in 00:05:49.546 1+0 records out 00:05:49.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222764 s, 18.4 MB/s 00:05:49.546 20:55:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.546 20:55:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:49.546 20:55:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.546 20:55:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:49.546 20:55:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:49.546 20:55:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.546 20:55:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.546 20:55:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.546 20:55:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.546 20:55:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.803 20:55:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:49.803 { 00:05:49.803 "nbd_device": "/dev/nbd0", 00:05:49.803 "bdev_name": "Malloc0" 00:05:49.803 }, 00:05:49.803 { 00:05:49.803 "nbd_device": "/dev/nbd1", 00:05:49.803 "bdev_name": "Malloc1" 00:05:49.803 } 00:05:49.803 ]' 00:05:49.803 20:55:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.803 { 00:05:49.803 "nbd_device": "/dev/nbd0", 00:05:49.803 "bdev_name": "Malloc0" 00:05:49.803 }, 00:05:49.803 { 00:05:49.803 "nbd_device": "/dev/nbd1", 00:05:49.803 "bdev_name": "Malloc1" 00:05:49.803 } 00:05:49.803 ]' 00:05:49.803 20:55:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.061 /dev/nbd1' 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.061 /dev/nbd1' 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.061 256+0 records in 00:05:50.061 256+0 records out 00:05:50.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506653 s, 207 MB/s 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.061 256+0 records in 00:05:50.061 256+0 records out 00:05:50.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243946 s, 43.0 MB/s 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.061 256+0 records in 00:05:50.061 256+0 records out 00:05:50.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292933 s, 35.8 MB/s 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.061 20:55:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.062 20:55:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.062 20:55:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:50.062 20:55:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.062 20:55:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.320 20:55:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.320 20:55:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.320 20:55:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.320 20:55:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.320 20:55:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.320 20:55:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.320 20:55:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.320 20:55:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.320 20:55:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.320 20:55:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.578 20:55:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.578 20:55:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.578 20:55:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.578 20:55:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.578 20:55:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.578 20:55:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.578 20:55:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.578 20:55:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.578 20:55:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.578 20:55:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.578 20:55:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.836 20:55:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.836 20:55:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.836 20:55:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.836 20:55:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.836 20:55:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.836 20:55:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.836 20:55:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.836 20:55:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.836 20:55:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.836 20:55:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.836 20:55:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.836 20:55:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.836 20:55:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.403 20:55:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.780 [2024-11-19 20:55:26.267761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.780 [2024-11-19 20:55:26.402507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.780 [2024-11-19 20:55:26.402509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.038 [2024-11-19 20:55:26.618257] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:53.038 [2024-11-19 20:55:26.618366] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.411 20:55:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.411 20:55:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:54.411 spdk_app_start Round 2 00:05:54.412 20:55:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2854851 /var/tmp/spdk-nbd.sock 00:05:54.412 20:55:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2854851 ']' 00:05:54.412 20:55:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.412 20:55:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.412 20:55:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
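The count checks interleaved above ('[' 2 -ne 2 ']' while both devices are attached, '[' 0 -ne 0 ']' once they are stopped) come from parsing the nbd_get_disks JSON and counting /dev/nbd entries. A minimal sketch of that check, against the same socket and with the workspace prefix abbreviated:

    # ask the app which NBD devices are currently attached
    disks_json=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    # pull out the device nodes and count them; grep -c exits non-zero on an empty list
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    # expect 2 while Malloc0/Malloc1 are exported, and 0 after nbd_stop_disk
    [ "$count" -ne 2 ] && echo "unexpected NBD device count: $count"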
00:05:54.412 20:55:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.412 20:55:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.669 20:55:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.669 20:55:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:54.669 20:55:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.928 Malloc0 00:05:54.928 20:55:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.498 Malloc1 00:05:55.498 20:55:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.498 20:55:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.498 20:55:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.498 20:55:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.498 20:55:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.498 20:55:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.498 20:55:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.498 20:55:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.498 20:55:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.498 20:55:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.498 20:55:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.498 20:55:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.498 20:55:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.498 20:55:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.498 20:55:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.498 20:55:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.757 /dev/nbd0 00:05:55.757 20:55:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.757 20:55:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.757 20:55:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:55.757 20:55:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:55.758 20:55:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.758 20:55:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.758 20:55:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:55.758 20:55:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:55.758 20:55:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.758 20:55:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.758 20:55:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:55.758 1+0 records in 00:05:55.758 1+0 records out 00:05:55.758 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220296 s, 18.6 MB/s 00:05:55.758 20:55:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.758 20:55:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:55.758 20:55:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.758 20:55:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.758 20:55:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:55.758 20:55:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.758 20:55:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.758 20:55:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.015 /dev/nbd1 00:05:56.015 20:55:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.015 20:55:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.015 20:55:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:56.015 20:55:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:56.015 20:55:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.015 20:55:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.015 20:55:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:56.015 20:55:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:56.015 20:55:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.015 20:55:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.015 20:55:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.015 1+0 records in 00:05:56.015 1+0 records out 00:05:56.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244185 s, 16.8 MB/s 00:05:56.015 20:55:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.015 20:55:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:56.015 20:55:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.015 20:55:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.015 20:55:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:56.015 20:55:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.015 20:55:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.016 20:55:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.016 20:55:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.016 20:55:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.274 20:55:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:56.274 { 00:05:56.274 "nbd_device": "/dev/nbd0", 00:05:56.274 "bdev_name": "Malloc0" 00:05:56.274 }, 00:05:56.274 { 00:05:56.274 "nbd_device": "/dev/nbd1", 00:05:56.274 "bdev_name": "Malloc1" 00:05:56.274 } 00:05:56.274 ]' 00:05:56.274 20:55:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.274 { 00:05:56.274 "nbd_device": "/dev/nbd0", 00:05:56.274 "bdev_name": "Malloc0" 00:05:56.274 }, 00:05:56.274 { 00:05:56.274 "nbd_device": "/dev/nbd1", 00:05:56.274 "bdev_name": "Malloc1" 00:05:56.274 } 00:05:56.274 ]' 00:05:56.274 20:55:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.274 20:55:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.274 /dev/nbd1' 00:05:56.274 20:55:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.274 /dev/nbd1' 00:05:56.274 20:55:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.274 20:55:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.274 20:55:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.274 20:55:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.274 20:55:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.274 20:55:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.274 20:55:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.274 20:55:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.274 20:55:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.274 20:55:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.274 20:55:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.274 20:55:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.274 256+0 records in 00:05:56.274 256+0 records out 00:05:56.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00512985 s, 204 MB/s 00:05:56.274 20:55:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.274 20:55:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.274 256+0 records in 00:05:56.274 256+0 records out 00:05:56.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245092 s, 42.8 MB/s 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.274 256+0 records in 00:05:56.274 256+0 records out 00:05:56.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305083 s, 34.4 MB/s 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.274 20:55:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.840 20:55:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.840 20:55:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.840 20:55:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.840 20:55:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.840 20:55:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.840 20:55:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.840 20:55:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.840 20:55:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.840 20:55:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.840 20:55:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.098 20:55:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.098 20:55:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.098 20:55:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.098 20:55:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.098 20:55:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.098 20:55:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.098 20:55:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.098 20:55:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.098 20:55:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.098 20:55:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.098 20:55:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.355 20:55:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.355 20:55:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.355 20:55:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.355 20:55:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.355 20:55:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.355 20:55:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.355 20:55:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:57.355 20:55:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.355 20:55:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.355 20:55:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.355 20:55:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.355 20:55:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.355 20:55:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.920 20:55:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:58.851 [2024-11-19 20:55:32.612745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.109 [2024-11-19 20:55:32.747439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.109 [2024-11-19 20:55:32.747444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.368 [2024-11-19 20:55:32.961654] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.368 [2024-11-19 20:55:32.961734] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.792 20:55:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2854851 /var/tmp/spdk-nbd.sock 00:06:00.792 20:55:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2854851 ']' 00:06:00.792 20:55:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.792 20:55:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.792 20:55:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:00.792 20:55:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.792 20:55:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.049 20:55:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.049 20:55:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:01.049 20:55:34 event.app_repeat -- event/event.sh@39 -- # killprocess 2854851 00:06:01.049 20:55:34 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2854851 ']' 00:06:01.049 20:55:34 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2854851 00:06:01.049 20:55:34 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:01.049 20:55:34 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.049 20:55:34 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2854851 00:06:01.049 20:55:34 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.049 20:55:34 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.049 20:55:34 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2854851' 00:06:01.049 killing process with pid 2854851 00:06:01.049 20:55:34 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2854851 00:06:01.049 20:55:34 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2854851 00:06:01.984 spdk_app_start is called in Round 0. 00:06:01.984 Shutdown signal received, stop current app iteration 00:06:01.984 Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 reinitialization... 00:06:01.984 spdk_app_start is called in Round 1. 00:06:01.984 Shutdown signal received, stop current app iteration 00:06:01.984 Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 reinitialization... 00:06:01.984 spdk_app_start is called in Round 2. 00:06:01.984 Shutdown signal received, stop current app iteration 00:06:01.984 Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 reinitialization... 00:06:01.984 spdk_app_start is called in Round 3. 
00:06:01.984 Shutdown signal received, stop current app iteration 00:06:02.242 20:55:35 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:02.242 20:55:35 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:02.242 00:06:02.242 real 0m21.217s 00:06:02.242 user 0m45.105s 00:06:02.242 sys 0m3.442s 00:06:02.242 20:55:35 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.242 20:55:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.242 ************************************ 00:06:02.242 END TEST app_repeat 00:06:02.242 ************************************ 00:06:02.242 20:55:35 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:02.242 20:55:35 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:02.242 20:55:35 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.242 20:55:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.242 20:55:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.242 ************************************ 00:06:02.242 START TEST cpu_locks 00:06:02.242 ************************************ 00:06:02.242 20:55:35 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:02.242 * Looking for test storage... 00:06:02.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:02.242 20:55:35 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.242 20:55:35 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.242 20:55:35 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.242 20:55:35 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.242 20:55:35 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:02.242 20:55:35 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.242 20:55:35 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.242 --rc genhtml_branch_coverage=1 00:06:02.242 --rc genhtml_function_coverage=1 00:06:02.242 --rc genhtml_legend=1 00:06:02.242 --rc geninfo_all_blocks=1 00:06:02.242 --rc geninfo_unexecuted_blocks=1 00:06:02.243 00:06:02.243 ' 00:06:02.243 20:55:35 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.243 --rc genhtml_branch_coverage=1 00:06:02.243 --rc genhtml_function_coverage=1 00:06:02.243 --rc genhtml_legend=1 00:06:02.243 --rc geninfo_all_blocks=1 00:06:02.243 --rc geninfo_unexecuted_blocks=1 00:06:02.243 00:06:02.243 ' 00:06:02.243 20:55:35 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.243 --rc genhtml_branch_coverage=1 00:06:02.243 --rc genhtml_function_coverage=1 00:06:02.243 --rc genhtml_legend=1 00:06:02.243 --rc geninfo_all_blocks=1 00:06:02.243 --rc geninfo_unexecuted_blocks=1 00:06:02.243 00:06:02.243 ' 00:06:02.243 20:55:35 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.243 --rc genhtml_branch_coverage=1 00:06:02.243 --rc genhtml_function_coverage=1 00:06:02.243 --rc genhtml_legend=1 00:06:02.243 --rc geninfo_all_blocks=1 00:06:02.243 --rc geninfo_unexecuted_blocks=1 00:06:02.243 00:06:02.243 ' 00:06:02.243 20:55:35 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:02.243 20:55:35 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:02.243 20:55:35 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:02.243 20:55:35 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:02.243 20:55:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.243 20:55:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.243 20:55:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.243 ************************************ 
00:06:02.243 START TEST default_locks 00:06:02.243 ************************************ 00:06:02.243 20:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:02.243 20:55:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2857606 00:06:02.243 20:55:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.243 20:55:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2857606 00:06:02.243 20:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2857606 ']' 00:06:02.243 20:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.243 20:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.243 20:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.243 20:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.243 20:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.501 [2024-11-19 20:55:36.098327] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:02.501 [2024-11-19 20:55:36.098510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2857606 ] 00:06:02.501 [2024-11-19 20:55:36.245377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.782 [2024-11-19 20:55:36.382615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.717 20:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.717 20:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:03.717 20:55:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2857606 00:06:03.717 20:55:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2857606 00:06:03.717 20:55:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.976 lslocks: write error 00:06:03.976 20:55:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2857606 00:06:03.976 20:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2857606 ']' 00:06:03.976 20:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2857606 00:06:03.976 20:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:03.976 20:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.976 20:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2857606 00:06:03.976 20:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.976 20:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.976 20:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2857606' 00:06:03.976 killing process with pid 2857606 00:06:03.976 20:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2857606 00:06:03.976 20:55:37 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2857606 00:06:06.505 20:55:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2857606 00:06:06.505 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:06.505 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2857606 00:06:06.505 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:06.505 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.505 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:06.505 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.505 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2857606 00:06:06.505 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2857606 ']' 00:06:06.505 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.505 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.505 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:06.505 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.505 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2857606) - No such process 00:06:06.505 ERROR: process (pid: 2857606) is no longer running 00:06:06.506 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.506 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:06.506 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:06.506 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.506 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.506 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.506 20:55:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:06.506 20:55:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:06.506 20:55:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:06.506 20:55:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:06.506 00:06:06.506 real 0m4.077s 00:06:06.506 user 0m4.054s 00:06:06.506 sys 0m0.754s 00:06:06.506 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.506 20:55:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.506 ************************************ 00:06:06.506 END TEST default_locks 00:06:06.506 ************************************ 00:06:06.506 20:55:40 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:06.506 20:55:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.506 20:55:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.506 20:55:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.506 ************************************ 00:06:06.506 START TEST default_locks_via_rpc 00:06:06.506 ************************************ 00:06:06.506 20:55:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:06.506 20:55:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2858049 00:06:06.506 20:55:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.506 20:55:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2858049 00:06:06.506 20:55:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2858049 ']' 00:06:06.506 20:55:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.506 20:55:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.506 20:55:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:06.506 20:55:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.506 20:55:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.506 [2024-11-19 20:55:40.234328] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:06.506 [2024-11-19 20:55:40.234503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858049 ] 00:06:06.764 [2024-11-19 20:55:40.370647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.764 [2024-11-19 20:55:40.503387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.698 20:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.698 20:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:07.698 20:55:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:07.698 20:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.698 20:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.698 20:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.698 20:55:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:07.698 20:55:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.698 20:55:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.698 20:55:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.698 20:55:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:07.698 20:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.698 20:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.698 20:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.698 20:55:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2858049 00:06:07.698 20:55:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2858049 00:06:07.698 20:55:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.265 20:55:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2858049 00:06:08.265 20:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2858049 ']' 00:06:08.265 20:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2858049 00:06:08.265 20:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:08.265 20:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.265 20:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2858049 00:06:08.265 20:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.265 
20:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.265 20:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2858049' 00:06:08.265 killing process with pid 2858049 00:06:08.265 20:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2858049 00:06:08.265 20:55:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2858049 00:06:10.797 00:06:10.797 real 0m4.148s 00:06:10.797 user 0m4.140s 00:06:10.797 sys 0m0.777s 00:06:10.797 20:55:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.797 20:55:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.797 ************************************ 00:06:10.797 END TEST default_locks_via_rpc 00:06:10.797 ************************************ 00:06:10.797 20:55:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:10.797 20:55:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.797 20:55:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.797 20:55:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.797 ************************************ 00:06:10.797 START TEST non_locking_app_on_locked_coremask 00:06:10.797 ************************************ 00:06:10.797 20:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:10.797 20:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2858606 00:06:10.797 20:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.797 20:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2858606 /var/tmp/spdk.sock 00:06:10.797 20:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2858606 ']' 00:06:10.797 20:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.797 20:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.797 20:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.797 20:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.797 20:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.797 [2024-11-19 20:55:44.418535] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:06:10.797 [2024-11-19 20:55:44.418680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858606 ] 00:06:10.797 [2024-11-19 20:55:44.561412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.056 [2024-11-19 20:55:44.699874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.992 20:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.992 20:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:11.992 20:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2858748 00:06:11.992 20:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:11.992 20:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2858748 /var/tmp/spdk2.sock 00:06:11.992 20:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2858748 ']' 00:06:11.992 20:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.992 20:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.992 20:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.992 20:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.992 20:55:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.992 [2024-11-19 20:55:45.751221] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:11.992 [2024-11-19 20:55:45.751358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858748 ] 00:06:12.250 [2024-11-19 20:55:45.969503] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:12.250 [2024-11-19 20:55:45.969584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.508 [2024-11-19 20:55:46.243113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.039 20:55:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.039 20:55:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:15.039 20:55:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2858606 00:06:15.039 20:55:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2858606 00:06:15.039 20:55:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.039 lslocks: write error 00:06:15.039 20:55:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2858606 00:06:15.039 20:55:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2858606 ']' 00:06:15.039 20:55:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2858606 00:06:15.039 20:55:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:15.039 20:55:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.039 20:55:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2858606 00:06:15.297 20:55:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.297 20:55:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.297 20:55:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2858606' 00:06:15.297 killing process with pid 2858606 00:06:15.297 20:55:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2858606 00:06:15.297 20:55:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2858606 00:06:20.567 20:55:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2858748 00:06:20.567 20:55:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2858748 ']' 00:06:20.567 20:55:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2858748 00:06:20.567 20:55:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:20.567 20:55:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.567 20:55:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2858748 00:06:20.567 20:55:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.567 20:55:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.567 20:55:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2858748' 00:06:20.567 
killing process with pid 2858748 00:06:20.567 20:55:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2858748 00:06:20.567 20:55:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2858748 00:06:22.468 00:06:22.468 real 0m11.795s 00:06:22.468 user 0m12.212s 00:06:22.468 sys 0m1.456s 00:06:22.468 20:55:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.468 20:55:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.468 ************************************ 00:06:22.468 END TEST non_locking_app_on_locked_coremask 00:06:22.468 ************************************ 00:06:22.468 20:55:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:22.468 20:55:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.468 20:55:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.468 20:55:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.469 ************************************ 00:06:22.469 START TEST locking_app_on_unlocked_coremask 00:06:22.469 ************************************ 00:06:22.469 20:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:22.469 20:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2860100 00:06:22.469 20:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:22.469 20:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2860100 /var/tmp/spdk.sock 00:06:22.469 20:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2860100 ']' 00:06:22.469 20:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.469 20:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.469 20:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.469 20:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.469 20:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.469 [2024-11-19 20:55:56.261945] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:22.469 [2024-11-19 20:55:56.262108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2860100 ] 00:06:22.727 [2024-11-19 20:55:56.408414] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:22.727 [2024-11-19 20:55:56.408476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.986 [2024-11-19 20:55:56.546749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.920 20:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.920 20:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:23.920 20:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2860237 00:06:23.920 20:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:23.920 20:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2860237 /var/tmp/spdk2.sock 00:06:23.920 20:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2860237 ']' 00:06:23.920 20:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.920 20:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.920 20:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.920 20:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.920 20:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.920 [2024-11-19 20:55:57.598290] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:06:23.920 [2024-11-19 20:55:57.598448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2860237 ] 00:06:24.178 [2024-11-19 20:55:57.797051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.437 [2024-11-19 20:55:58.074843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.964 20:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.964 20:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:26.964 20:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2860237 00:06:26.964 20:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2860237 00:06:26.964 20:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.223 lslocks: write error 00:06:27.223 20:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2860100 00:06:27.223 20:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2860100 ']' 00:06:27.223 20:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2860100 00:06:27.223 20:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:27.223 20:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.223 20:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2860100 00:06:27.223 20:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.223 20:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.223 20:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2860100' 00:06:27.223 killing process with pid 2860100 00:06:27.223 20:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2860100 00:06:27.223 20:56:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2860100 00:06:32.489 20:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2860237 00:06:32.489 20:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2860237 ']' 00:06:32.489 20:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2860237 00:06:32.489 20:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:32.489 20:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.489 20:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2860237 00:06:32.489 20:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.489 20:56:05 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.489 20:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2860237' 00:06:32.489 killing process with pid 2860237 00:06:32.489 20:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2860237 00:06:32.489 20:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2860237 00:06:35.019 00:06:35.019 real 0m12.029s 00:06:35.019 user 0m12.446s 00:06:35.019 sys 0m1.500s 00:06:35.019 20:56:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.019 20:56:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.019 ************************************ 00:06:35.019 END TEST locking_app_on_unlocked_coremask 00:06:35.019 ************************************ 00:06:35.019 20:56:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:35.019 20:56:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.019 20:56:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.019 20:56:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.019 ************************************ 00:06:35.019 START TEST locking_app_on_locked_coremask 00:06:35.019 ************************************ 00:06:35.019 20:56:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:35.019 20:56:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2861482 00:06:35.019 20:56:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.019 20:56:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2861482 /var/tmp/spdk.sock 00:06:35.019 20:56:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2861482 ']' 00:06:35.019 20:56:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.019 20:56:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.019 20:56:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.019 20:56:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.019 20:56:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.019 [2024-11-19 20:56:08.342880] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:06:35.019 [2024-11-19 20:56:08.343027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861482 ] 00:06:35.019 [2024-11-19 20:56:08.478599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.019 [2024-11-19 20:56:08.613022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.953 20:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.953 20:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:35.953 20:56:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2861622 00:06:35.953 20:56:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:35.953 20:56:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2861622 /var/tmp/spdk2.sock 00:06:35.953 20:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:35.953 20:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2861622 /var/tmp/spdk2.sock 00:06:35.953 20:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:35.953 20:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.953 20:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:35.953 20:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.953 20:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2861622 /var/tmp/spdk2.sock 00:06:35.953 20:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2861622 ']' 00:06:35.953 20:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.953 20:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.953 20:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.953 20:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.953 20:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.953 [2024-11-19 20:56:09.688322] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:06:35.953 [2024-11-19 20:56:09.688505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861622 ] 00:06:36.212 [2024-11-19 20:56:09.901540] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2861482 has claimed it. 00:06:36.212 [2024-11-19 20:56:09.901639] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:36.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2861622) - No such process 00:06:36.779 ERROR: process (pid: 2861622) is no longer running 00:06:36.779 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.779 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:36.779 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:36.779 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:36.779 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:36.779 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:36.779 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2861482 00:06:36.779 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2861482 00:06:36.779 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.038 lslocks: write error 00:06:37.038 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2861482 00:06:37.038 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2861482 ']' 00:06:37.038 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2861482 00:06:37.038 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:37.038 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.038 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2861482 00:06:37.038 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.038 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.038 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2861482' 00:06:37.038 killing process with pid 2861482 00:06:37.038 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2861482 00:06:37.038 20:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2861482 00:06:39.568 00:06:39.568 real 0m4.835s 00:06:39.568 user 0m5.080s 00:06:39.568 sys 0m0.922s 00:06:39.568 20:56:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:06:39.569 20:56:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.569 ************************************ 00:06:39.569 END TEST locking_app_on_locked_coremask 00:06:39.569 ************************************ 00:06:39.569 20:56:13 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:39.569 20:56:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.569 20:56:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.569 20:56:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.569 ************************************ 00:06:39.569 START TEST locking_overlapped_coremask 00:06:39.569 ************************************ 00:06:39.569 20:56:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:39.569 20:56:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2862169 00:06:39.569 20:56:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:39.569 20:56:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2862169 /var/tmp/spdk.sock 00:06:39.569 20:56:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2862169 ']' 00:06:39.569 20:56:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.569 20:56:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.569 20:56:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.569 20:56:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.569 20:56:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.569 [2024-11-19 20:56:13.227492] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:06:39.569 [2024-11-19 20:56:13.227655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2862169 ] 00:06:39.827 [2024-11-19 20:56:13.375578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.827 [2024-11-19 20:56:13.523679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.827 [2024-11-19 20:56:13.523751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.827 [2024-11-19 20:56:13.523756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.761 20:56:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.761 20:56:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:40.761 20:56:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2862313 00:06:40.761 20:56:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:40.761 20:56:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2862313 /var/tmp/spdk2.sock 00:06:40.761 20:56:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:40.761 20:56:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2862313 /var/tmp/spdk2.sock 00:06:40.761 20:56:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:40.761 20:56:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.761 20:56:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:40.761 20:56:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.761 20:56:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2862313 /var/tmp/spdk2.sock 00:06:40.761 20:56:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2862313 ']' 00:06:40.761 20:56:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.761 20:56:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.761 20:56:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.761 20:56:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.762 20:56:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.019 [2024-11-19 20:56:14.591214] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:06:41.019 [2024-11-19 20:56:14.591394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2862313 ] 00:06:41.019 [2024-11-19 20:56:14.804423] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2862169 has claimed it. 00:06:41.019 [2024-11-19 20:56:14.804519] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:41.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2862313) - No such process 00:06:41.604 ERROR: process (pid: 2862313) is no longer running 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2862169 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2862169 ']' 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2862169 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2862169 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2862169' 00:06:41.604 killing process with pid 2862169 00:06:41.604 20:56:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2862169 00:06:41.604 20:56:15 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2862169 00:06:44.173 00:06:44.173 real 0m4.651s 00:06:44.173 user 0m12.661s 00:06:44.173 sys 0m0.782s 00:06:44.173 20:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.173 20:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.173 ************************************ 00:06:44.173 END TEST locking_overlapped_coremask 00:06:44.173 ************************************ 00:06:44.173 20:56:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:44.173 20:56:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.173 20:56:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.173 20:56:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.173 ************************************ 00:06:44.173 START TEST locking_overlapped_coremask_via_rpc 00:06:44.173 ************************************ 00:06:44.173 20:56:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:44.173 20:56:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2862741 00:06:44.173 20:56:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:44.173 20:56:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2862741 /var/tmp/spdk.sock 00:06:44.173 20:56:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2862741 ']' 00:06:44.173 20:56:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.173 20:56:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.173 20:56:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.173 20:56:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.173 20:56:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.173 [2024-11-19 20:56:17.935493] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:44.173 [2024-11-19 20:56:17.935652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2862741 ] 00:06:44.432 [2024-11-19 20:56:18.088265] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:44.432 [2024-11-19 20:56:18.088330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.690 [2024-11-19 20:56:18.228876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.690 [2024-11-19 20:56:18.228928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.690 [2024-11-19 20:56:18.228938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.624 20:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.625 20:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:45.625 20:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2862890 00:06:45.625 20:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2862890 /var/tmp/spdk2.sock 00:06:45.625 20:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2862890 ']' 00:06:45.625 20:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.625 20:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.625 20:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.625 20:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:45.625 20:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.625 20:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.625 [2024-11-19 20:56:19.299624] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:45.625 [2024-11-19 20:56:19.299779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2862890 ] 00:06:45.883 [2024-11-19 20:56:19.512089] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:45.883 [2024-11-19 20:56:19.512163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.140 [2024-11-19 20:56:19.807332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.140 [2024-11-19 20:56:19.807384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.140 [2024-11-19 20:56:19.807394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:48.670 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.670 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:48.670 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:48.670 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.670 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.670 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.670 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:48.670 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:48.670 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:48.670 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:48.670 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.670 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:48.670 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.670 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:48.670 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.670 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.670 [2024-11-19 20:56:22.033244] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2862741 has claimed it. 
00:06:48.670 request: 00:06:48.670 { 00:06:48.670 "method": "framework_enable_cpumask_locks", 00:06:48.670 "req_id": 1 00:06:48.670 } 00:06:48.670 Got JSON-RPC error response 00:06:48.670 response: 00:06:48.670 { 00:06:48.670 "code": -32603, 00:06:48.670 "message": "Failed to claim CPU core: 2" 00:06:48.670 } 00:06:48.670 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2862741 /var/tmp/spdk.sock 00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2862741 ']' 00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2862890 /var/tmp/spdk2.sock 00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2862890 ']' 00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.671 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.929 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.929 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:48.929 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:48.929 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:48.929 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:48.929 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:48.929 00:06:48.929 real 0m4.775s 00:06:48.929 user 0m1.647s 00:06:48.929 sys 0m0.261s 00:06:48.929 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.929 20:56:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.929 ************************************ 00:06:48.929 END TEST locking_overlapped_coremask_via_rpc 00:06:48.929 ************************************ 00:06:48.929 20:56:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:48.929 20:56:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2862741 ]] 00:06:48.929 20:56:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2862741 00:06:48.929 20:56:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2862741 ']' 00:06:48.929 20:56:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2862741 00:06:48.929 20:56:22 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:48.929 20:56:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.929 20:56:22 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2862741 00:06:48.929 20:56:22 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.929 20:56:22 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.929 20:56:22 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2862741' 00:06:48.929 killing process with pid 2862741 00:06:48.929 20:56:22 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2862741 00:06:48.929 20:56:22 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2862741 00:06:51.458 20:56:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2862890 ]] 00:06:51.458 20:56:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2862890 00:06:51.458 20:56:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2862890 ']' 00:06:51.458 20:56:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2862890 00:06:51.458 20:56:24 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:51.458 20:56:24 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:51.458 20:56:24 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2862890 00:06:51.458 20:56:24 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:51.458 20:56:24 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:51.458 20:56:24 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2862890' 00:06:51.458 killing process with pid 2862890 00:06:51.458 20:56:24 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2862890 00:06:51.458 20:56:24 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2862890 00:06:53.359 20:56:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.359 20:56:27 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:53.359 20:56:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2862741 ]] 00:06:53.359 20:56:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2862741 00:06:53.359 20:56:27 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2862741 ']' 00:06:53.359 20:56:27 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2862741 00:06:53.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2862741) - No such process 00:06:53.359 20:56:27 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2862741 is not found' 00:06:53.359 Process with pid 2862741 is not found 00:06:53.359 20:56:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2862890 ]] 00:06:53.359 20:56:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2862890 00:06:53.359 20:56:27 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2862890 ']' 00:06:53.359 20:56:27 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2862890 00:06:53.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2862890) - No such process 00:06:53.359 20:56:27 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2862890 is not found' 00:06:53.359 Process with pid 2862890 is not found 00:06:53.359 20:56:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.359 00:06:53.359 real 0m51.252s 00:06:53.359 user 1m27.839s 00:06:53.359 sys 0m7.763s 00:06:53.359 20:56:27 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.359 20:56:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.359 ************************************ 00:06:53.359 END TEST cpu_locks 00:06:53.359 ************************************ 00:06:53.359 00:06:53.359 real 1m20.744s 00:06:53.359 user 2m25.804s 00:06:53.359 sys 0m12.430s 00:06:53.359 20:56:27 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.359 20:56:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.359 ************************************ 00:06:53.359 END TEST event 00:06:53.359 ************************************ 00:06:53.359 20:56:27 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:53.359 20:56:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.359 20:56:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.359 20:56:27 -- common/autotest_common.sh@10 -- # set +x 00:06:53.359 ************************************ 00:06:53.359 START TEST thread 00:06:53.359 ************************************ 00:06:53.359 20:56:27 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:53.617 * Looking for test storage... 00:06:53.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:53.617 20:56:27 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:53.617 20:56:27 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:53.617 20:56:27 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:53.617 20:56:27 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:53.617 20:56:27 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.617 20:56:27 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.617 20:56:27 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.617 20:56:27 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.617 20:56:27 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.617 20:56:27 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.617 20:56:27 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.617 20:56:27 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.617 20:56:27 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.617 20:56:27 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.617 20:56:27 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.617 20:56:27 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:53.617 20:56:27 thread -- scripts/common.sh@345 -- # : 1 00:06:53.617 20:56:27 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.617 20:56:27 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.617 20:56:27 thread -- scripts/common.sh@365 -- # decimal 1 00:06:53.617 20:56:27 thread -- scripts/common.sh@353 -- # local d=1 00:06:53.617 20:56:27 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.617 20:56:27 thread -- scripts/common.sh@355 -- # echo 1 00:06:53.617 20:56:27 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.617 20:56:27 thread -- scripts/common.sh@366 -- # decimal 2 00:06:53.617 20:56:27 thread -- scripts/common.sh@353 -- # local d=2 00:06:53.617 20:56:27 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.617 20:56:27 thread -- scripts/common.sh@355 -- # echo 2 00:06:53.617 20:56:27 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.617 20:56:27 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.617 20:56:27 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.617 20:56:27 thread -- scripts/common.sh@368 -- # return 0 00:06:53.617 20:56:27 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.617 20:56:27 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:53.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.617 --rc genhtml_branch_coverage=1 00:06:53.617 --rc genhtml_function_coverage=1 00:06:53.617 --rc genhtml_legend=1 00:06:53.617 --rc geninfo_all_blocks=1 00:06:53.617 --rc geninfo_unexecuted_blocks=1 00:06:53.617 00:06:53.617 ' 00:06:53.617 20:56:27 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:53.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.617 --rc genhtml_branch_coverage=1 00:06:53.617 --rc genhtml_function_coverage=1 00:06:53.617 --rc genhtml_legend=1 00:06:53.617 --rc geninfo_all_blocks=1 00:06:53.617 --rc geninfo_unexecuted_blocks=1 00:06:53.617 
00:06:53.617 ' 00:06:53.617 20:56:27 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:53.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.617 --rc genhtml_branch_coverage=1 00:06:53.617 --rc genhtml_function_coverage=1 00:06:53.617 --rc genhtml_legend=1 00:06:53.617 --rc geninfo_all_blocks=1 00:06:53.617 --rc geninfo_unexecuted_blocks=1 00:06:53.617 00:06:53.617 ' 00:06:53.617 20:56:27 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:53.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.617 --rc genhtml_branch_coverage=1 00:06:53.617 --rc genhtml_function_coverage=1 00:06:53.617 --rc genhtml_legend=1 00:06:53.617 --rc geninfo_all_blocks=1 00:06:53.617 --rc geninfo_unexecuted_blocks=1 00:06:53.617 00:06:53.617 ' 00:06:53.617 20:56:27 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.617 20:56:27 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:53.617 20:56:27 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.617 20:56:27 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.617 ************************************ 00:06:53.617 START TEST thread_poller_perf 00:06:53.617 ************************************ 00:06:53.617 20:56:27 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.617 [2024-11-19 20:56:27.349749] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:53.617 [2024-11-19 20:56:27.349857] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2863930 ] 00:06:53.876 [2024-11-19 20:56:27.486683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.876 [2024-11-19 20:56:27.624455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.876 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:55.249 [2024-11-19T19:56:29.044Z] ====================================== 00:06:55.249 [2024-11-19T19:56:29.044Z] busy:2718429907 (cyc) 00:06:55.249 [2024-11-19T19:56:29.044Z] total_run_count: 282000 00:06:55.249 [2024-11-19T19:56:29.044Z] tsc_hz: 2700000000 (cyc) 00:06:55.249 [2024-11-19T19:56:29.044Z] ====================================== 00:06:55.249 [2024-11-19T19:56:29.044Z] poller_cost: 9639 (cyc), 3570 (nsec) 00:06:55.249 00:06:55.249 real 0m1.578s 00:06:55.249 user 0m1.414s 00:06:55.249 sys 0m0.155s 00:06:55.249 20:56:28 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.250 20:56:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.250 ************************************ 00:06:55.250 END TEST thread_poller_perf 00:06:55.250 ************************************ 00:06:55.250 20:56:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.250 20:56:28 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:55.250 20:56:28 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.250 20:56:28 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.250 ************************************ 00:06:55.250 START TEST thread_poller_perf 00:06:55.250 ************************************ 00:06:55.250 20:56:28 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.250 [2024-11-19 20:56:28.977980] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:06:55.250 [2024-11-19 20:56:28.978141] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2864146 ] 00:06:55.519 [2024-11-19 20:56:29.137213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.519 [2024-11-19 20:56:29.275602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.519 Running 1000 pollers for 1 seconds with 0 microseconds period. 
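The poller_cost figures printed by poller_perf can be cross-checked from the counters in the same block: cycles per poll is the busy cycle count divided by total_run_count, and the nanosecond value follows from tsc_hz. A quick sanity check of the 1-microsecond-period run above, using plain shell arithmetic with the values copied from the log:

busy=2718429907      # busy TSC cycles over the 1 second window
runs=282000          # total_run_count
tsc_hz=2700000000    # 2.7 GHz

echo "cycles per poll: $(( busy / runs ))"                        # 9639 (cyc), matching the report
echo "ns per poll:     $(( busy * 1000000000 / tsc_hz / runs ))"  # 3570 (nsec), matching the report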
00:06:56.894 [2024-11-19T19:56:30.689Z] ====================================== 00:06:56.894 [2024-11-19T19:56:30.689Z] busy:2705407396 (cyc) 00:06:56.894 [2024-11-19T19:56:30.689Z] total_run_count: 3658000 00:06:56.894 [2024-11-19T19:56:30.689Z] tsc_hz: 2700000000 (cyc) 00:06:56.894 [2024-11-19T19:56:30.689Z] ====================================== 00:06:56.894 [2024-11-19T19:56:30.689Z] poller_cost: 739 (cyc), 273 (nsec) 00:06:56.894 00:06:56.894 real 0m1.589s 00:06:56.894 user 0m1.433s 00:06:56.894 sys 0m0.148s 00:06:56.894 20:56:30 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.894 20:56:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.894 ************************************ 00:06:56.894 END TEST thread_poller_perf 00:06:56.894 ************************************ 00:06:56.894 20:56:30 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:56.894 00:06:56.894 real 0m3.404s 00:06:56.894 user 0m2.997s 00:06:56.894 sys 0m0.404s 00:06:56.894 20:56:30 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.894 20:56:30 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.894 ************************************ 00:06:56.894 END TEST thread 00:06:56.894 ************************************ 00:06:56.894 20:56:30 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:56.894 20:56:30 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:56.894 20:56:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.894 20:56:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.894 20:56:30 -- common/autotest_common.sh@10 -- # set +x 00:06:56.894 ************************************ 00:06:56.894 START TEST app_cmdline 00:06:56.894 ************************************ 00:06:56.894 20:56:30 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:56.894 * Looking for test storage... 
00:06:56.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:56.894 20:56:30 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.894 20:56:30 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.894 20:56:30 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:57.152 20:56:30 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.152 20:56:30 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:57.152 20:56:30 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.152 20:56:30 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:57.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.152 --rc genhtml_branch_coverage=1 00:06:57.152 --rc genhtml_function_coverage=1 00:06:57.152 --rc genhtml_legend=1 00:06:57.152 --rc geninfo_all_blocks=1 00:06:57.152 --rc geninfo_unexecuted_blocks=1 00:06:57.152 00:06:57.152 ' 00:06:57.152 20:56:30 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:57.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.152 --rc genhtml_branch_coverage=1 00:06:57.152 --rc genhtml_function_coverage=1 00:06:57.152 --rc genhtml_legend=1 00:06:57.152 --rc geninfo_all_blocks=1 00:06:57.152 --rc geninfo_unexecuted_blocks=1 
00:06:57.152 00:06:57.152 ' 00:06:57.152 20:56:30 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:57.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.152 --rc genhtml_branch_coverage=1 00:06:57.152 --rc genhtml_function_coverage=1 00:06:57.152 --rc genhtml_legend=1 00:06:57.152 --rc geninfo_all_blocks=1 00:06:57.152 --rc geninfo_unexecuted_blocks=1 00:06:57.152 00:06:57.152 ' 00:06:57.152 20:56:30 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:57.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.152 --rc genhtml_branch_coverage=1 00:06:57.152 --rc genhtml_function_coverage=1 00:06:57.152 --rc genhtml_legend=1 00:06:57.152 --rc geninfo_all_blocks=1 00:06:57.152 --rc geninfo_unexecuted_blocks=1 00:06:57.152 00:06:57.153 ' 00:06:57.153 20:56:30 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:57.153 20:56:30 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2864416 00:06:57.153 20:56:30 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:57.153 20:56:30 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2864416 00:06:57.153 20:56:30 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2864416 ']' 00:06:57.153 20:56:30 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.153 20:56:30 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.153 20:56:30 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.153 20:56:30 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.153 20:56:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.153 [2024-11-19 20:56:30.840729] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:06:57.153 [2024-11-19 20:56:30.840878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2864416 ] 00:06:57.411 [2024-11-19 20:56:30.986504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.411 [2024-11-19 20:56:31.123933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.346 20:56:32 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.346 20:56:32 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:58.346 20:56:32 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:58.604 { 00:06:58.604 "version": "SPDK v25.01-pre git sha1 f22e807f1", 00:06:58.604 "fields": { 00:06:58.604 "major": 25, 00:06:58.604 "minor": 1, 00:06:58.604 "patch": 0, 00:06:58.604 "suffix": "-pre", 00:06:58.604 "commit": "f22e807f1" 00:06:58.604 } 00:06:58.604 } 00:06:58.604 20:56:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:58.604 20:56:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:58.604 20:56:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:58.604 20:56:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:58.604 20:56:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:58.604 20:56:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:58.604 20:56:32 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.604 20:56:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:58.604 20:56:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:58.604 20:56:32 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.604 20:56:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:58.604 20:56:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:58.604 20:56:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:58.604 20:56:32 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:58.604 20:56:32 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:58.604 20:56:32 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:58.604 20:56:32 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.604 20:56:32 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:58.604 20:56:32 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.604 20:56:32 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:58.604 20:56:32 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:58.604 20:56:32 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:58.604 20:56:32 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:58.604 20:56:32 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:58.862 request: 00:06:58.862 { 00:06:58.862 "method": "env_dpdk_get_mem_stats", 00:06:58.862 "req_id": 1 00:06:58.862 } 00:06:58.862 Got JSON-RPC error response 00:06:58.862 response: 00:06:58.862 { 00:06:58.862 "code": -32601, 00:06:58.862 "message": "Method not found" 00:06:58.862 } 00:06:58.862 20:56:32 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:58.862 20:56:32 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:58.862 20:56:32 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:58.862 20:56:32 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:58.862 20:56:32 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2864416 00:06:58.862 20:56:32 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2864416 ']' 00:06:58.862 20:56:32 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2864416 00:06:58.862 20:56:32 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:58.862 20:56:32 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.862 20:56:32 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2864416 00:06:59.120 20:56:32 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.120 20:56:32 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.120 20:56:32 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2864416' 00:06:59.120 killing process with pid 2864416 00:06:59.120 20:56:32 app_cmdline -- common/autotest_common.sh@973 -- # kill 2864416 00:06:59.121 20:56:32 app_cmdline -- common/autotest_common.sh@978 -- # wait 2864416 00:07:01.651 00:07:01.651 real 0m4.524s 00:07:01.651 user 0m4.917s 00:07:01.651 sys 0m0.732s 00:07:01.651 20:56:35 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.651 20:56:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.651 ************************************ 00:07:01.651 END TEST app_cmdline 00:07:01.651 ************************************ 00:07:01.651 20:56:35 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:01.651 20:56:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.651 20:56:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.651 20:56:35 -- common/autotest_common.sh@10 -- # set +x 00:07:01.651 ************************************ 00:07:01.651 START TEST version 00:07:01.651 ************************************ 00:07:01.651 20:56:35 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:01.651 * Looking for test storage... 
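The env_dpdk_get_mem_stats failure in the app_cmdline run above is the allowlist check doing its job: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so the non-allowed method is rejected here with -32601 "Method not found". A rough sketch of that behaviour, assuming the binary and script paths used elsewhere in this log (a real script would wait for the socket before issuing RPCs):

bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Start the target with only two RPCs exposed.
$bin --rpcs-allowed spdk_get_version,rpc_get_methods &

$rpc rpc_get_methods           # lists only the two allowed methods
$rpc spdk_get_version          # allowed: returns the version JSON shown above
$rpc env_dpdk_get_mem_stats    # rejected: -32601 "Method not found"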
00:07:01.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:01.651 20:56:35 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.651 20:56:35 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.651 20:56:35 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.651 20:56:35 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.651 20:56:35 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.651 20:56:35 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.651 20:56:35 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.651 20:56:35 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.651 20:56:35 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.651 20:56:35 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.651 20:56:35 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.651 20:56:35 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.651 20:56:35 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.651 20:56:35 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.651 20:56:35 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.652 20:56:35 version -- scripts/common.sh@344 -- # case "$op" in 00:07:01.652 20:56:35 version -- scripts/common.sh@345 -- # : 1 00:07:01.652 20:56:35 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.652 20:56:35 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.652 20:56:35 version -- scripts/common.sh@365 -- # decimal 1 00:07:01.652 20:56:35 version -- scripts/common.sh@353 -- # local d=1 00:07:01.652 20:56:35 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.652 20:56:35 version -- scripts/common.sh@355 -- # echo 1 00:07:01.652 20:56:35 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.652 20:56:35 version -- scripts/common.sh@366 -- # decimal 2 00:07:01.652 20:56:35 version -- scripts/common.sh@353 -- # local d=2 00:07:01.652 20:56:35 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.652 20:56:35 version -- scripts/common.sh@355 -- # echo 2 00:07:01.652 20:56:35 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.652 20:56:35 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.652 20:56:35 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.652 20:56:35 version -- scripts/common.sh@368 -- # return 0 00:07:01.652 20:56:35 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.652 20:56:35 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.652 --rc genhtml_branch_coverage=1 00:07:01.652 --rc genhtml_function_coverage=1 00:07:01.652 --rc genhtml_legend=1 00:07:01.652 --rc geninfo_all_blocks=1 00:07:01.652 --rc geninfo_unexecuted_blocks=1 00:07:01.652 00:07:01.652 ' 00:07:01.652 20:56:35 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.652 --rc genhtml_branch_coverage=1 00:07:01.652 --rc genhtml_function_coverage=1 00:07:01.652 --rc genhtml_legend=1 00:07:01.652 --rc geninfo_all_blocks=1 00:07:01.652 --rc geninfo_unexecuted_blocks=1 00:07:01.652 00:07:01.652 ' 00:07:01.652 20:56:35 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.652 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.652 --rc genhtml_branch_coverage=1 00:07:01.652 --rc genhtml_function_coverage=1 00:07:01.652 --rc genhtml_legend=1 00:07:01.652 --rc geninfo_all_blocks=1 00:07:01.652 --rc geninfo_unexecuted_blocks=1 00:07:01.652 00:07:01.652 ' 00:07:01.652 20:56:35 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.652 --rc genhtml_branch_coverage=1 00:07:01.652 --rc genhtml_function_coverage=1 00:07:01.652 --rc genhtml_legend=1 00:07:01.652 --rc geninfo_all_blocks=1 00:07:01.652 --rc geninfo_unexecuted_blocks=1 00:07:01.652 00:07:01.652 ' 00:07:01.652 20:56:35 version -- app/version.sh@17 -- # get_header_version major 00:07:01.652 20:56:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.652 20:56:35 version -- app/version.sh@14 -- # cut -f2 00:07:01.652 20:56:35 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.652 20:56:35 version -- app/version.sh@17 -- # major=25 00:07:01.652 20:56:35 version -- app/version.sh@18 -- # get_header_version minor 00:07:01.652 20:56:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.652 20:56:35 version -- app/version.sh@14 -- # cut -f2 00:07:01.652 20:56:35 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.652 20:56:35 version -- app/version.sh@18 -- # minor=1 00:07:01.652 20:56:35 version -- app/version.sh@19 -- # get_header_version patch 00:07:01.652 20:56:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.652 20:56:35 version -- app/version.sh@14 -- # cut -f2 00:07:01.652 20:56:35 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.652 20:56:35 version -- app/version.sh@19 -- # patch=0 00:07:01.652 20:56:35 version -- app/version.sh@20 -- # get_header_version suffix 00:07:01.652 20:56:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.652 20:56:35 version -- app/version.sh@14 -- # cut -f2 00:07:01.652 20:56:35 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.652 20:56:35 version -- app/version.sh@20 -- # suffix=-pre 00:07:01.652 20:56:35 version -- app/version.sh@22 -- # version=25.1 00:07:01.652 20:56:35 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:01.652 20:56:35 version -- app/version.sh@28 -- # version=25.1rc0 00:07:01.652 20:56:35 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:01.652 20:56:35 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:01.652 20:56:35 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:01.652 20:56:35 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:01.652 00:07:01.652 real 0m0.192s 00:07:01.652 user 0m0.122s 00:07:01.652 sys 0m0.093s 00:07:01.652 20:56:35 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.652 
20:56:35 version -- common/autotest_common.sh@10 -- # set +x 00:07:01.652 ************************************ 00:07:01.652 END TEST version 00:07:01.652 ************************************ 00:07:01.652 20:56:35 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:01.652 20:56:35 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:01.652 20:56:35 -- spdk/autotest.sh@194 -- # uname -s 00:07:01.652 20:56:35 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:01.652 20:56:35 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:01.652 20:56:35 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:01.652 20:56:35 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:01.652 20:56:35 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:01.652 20:56:35 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:01.652 20:56:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:01.652 20:56:35 -- common/autotest_common.sh@10 -- # set +x 00:07:01.652 20:56:35 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:01.652 20:56:35 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:01.652 20:56:35 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:01.652 20:56:35 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:01.652 20:56:35 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:01.652 20:56:35 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:01.652 20:56:35 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:01.652 20:56:35 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.652 20:56:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.652 20:56:35 -- common/autotest_common.sh@10 -- # set +x 00:07:01.652 ************************************ 00:07:01.652 START TEST nvmf_tcp 00:07:01.652 ************************************ 00:07:01.652 20:56:35 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:01.911 * Looking for test storage... 
00:07:01.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:01.911 20:56:35 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.911 20:56:35 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.911 20:56:35 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.911 20:56:35 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.911 20:56:35 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:01.911 20:56:35 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.911 20:56:35 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.911 --rc genhtml_branch_coverage=1 00:07:01.911 --rc genhtml_function_coverage=1 00:07:01.911 --rc genhtml_legend=1 00:07:01.911 --rc geninfo_all_blocks=1 00:07:01.911 --rc geninfo_unexecuted_blocks=1 00:07:01.911 00:07:01.911 ' 00:07:01.911 20:56:35 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.911 --rc genhtml_branch_coverage=1 00:07:01.911 --rc genhtml_function_coverage=1 00:07:01.911 --rc genhtml_legend=1 00:07:01.911 --rc geninfo_all_blocks=1 00:07:01.911 --rc geninfo_unexecuted_blocks=1 00:07:01.911 00:07:01.911 ' 00:07:01.911 20:56:35 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:01.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.911 --rc genhtml_branch_coverage=1 00:07:01.911 --rc genhtml_function_coverage=1 00:07:01.911 --rc genhtml_legend=1 00:07:01.911 --rc geninfo_all_blocks=1 00:07:01.911 --rc geninfo_unexecuted_blocks=1 00:07:01.911 00:07:01.911 ' 00:07:01.911 20:56:35 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.911 --rc genhtml_branch_coverage=1 00:07:01.911 --rc genhtml_function_coverage=1 00:07:01.911 --rc genhtml_legend=1 00:07:01.911 --rc geninfo_all_blocks=1 00:07:01.911 --rc geninfo_unexecuted_blocks=1 00:07:01.911 00:07:01.911 ' 00:07:01.912 20:56:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:01.912 20:56:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:01.912 20:56:35 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:01.912 20:56:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.912 20:56:35 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.912 20:56:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.912 ************************************ 00:07:01.912 START TEST nvmf_target_core 00:07:01.912 ************************************ 00:07:01.912 20:56:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:01.912 * Looking for test storage... 00:07:01.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:01.912 20:56:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.912 20:56:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.912 20:56:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:02.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.172 --rc genhtml_branch_coverage=1 00:07:02.172 --rc genhtml_function_coverage=1 00:07:02.172 --rc genhtml_legend=1 00:07:02.172 --rc geninfo_all_blocks=1 00:07:02.172 --rc geninfo_unexecuted_blocks=1 00:07:02.172 00:07:02.172 ' 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:02.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.172 --rc genhtml_branch_coverage=1 00:07:02.172 --rc genhtml_function_coverage=1 00:07:02.172 --rc genhtml_legend=1 00:07:02.172 --rc geninfo_all_blocks=1 00:07:02.172 --rc geninfo_unexecuted_blocks=1 00:07:02.172 00:07:02.172 ' 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:02.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.172 --rc genhtml_branch_coverage=1 00:07:02.172 --rc genhtml_function_coverage=1 00:07:02.172 --rc genhtml_legend=1 00:07:02.172 --rc geninfo_all_blocks=1 00:07:02.172 --rc geninfo_unexecuted_blocks=1 00:07:02.172 00:07:02.172 ' 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:02.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.172 --rc genhtml_branch_coverage=1 00:07:02.172 --rc genhtml_function_coverage=1 00:07:02.172 --rc genhtml_legend=1 00:07:02.172 --rc geninfo_all_blocks=1 00:07:02.172 --rc geninfo_unexecuted_blocks=1 00:07:02.172 00:07:02.172 ' 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.172 20:56:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:02.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:02.173 
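The "[: : integer expression expected" message above is a shell detail rather than a test failure: nvmf/common.sh line 33 runs an arithmetic test on an empty value ('[' '' -eq 1 ']'), which [ complains about and the script tolerates. A minimal reproduction and one common defensive form (the variable name is illustrative only):

flag=""
[ "$flag" -eq 1 ] && echo enabled      # -> "[: : integer expression expected" on stderr

# Defaulting the expansion keeps the comparison numeric and silences the warning.
[ "${flag:-0}" -eq 1 ] && echo enabled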
************************************ 00:07:02.173 START TEST nvmf_abort 00:07:02.173 ************************************ 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:02.173 * Looking for test storage... 00:07:02.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:02.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.173 --rc genhtml_branch_coverage=1 00:07:02.173 --rc genhtml_function_coverage=1 00:07:02.173 --rc genhtml_legend=1 00:07:02.173 --rc geninfo_all_blocks=1 00:07:02.173 --rc geninfo_unexecuted_blocks=1 00:07:02.173 00:07:02.173 ' 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:02.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.173 --rc genhtml_branch_coverage=1 00:07:02.173 --rc genhtml_function_coverage=1 00:07:02.173 --rc genhtml_legend=1 00:07:02.173 --rc geninfo_all_blocks=1 00:07:02.173 --rc geninfo_unexecuted_blocks=1 00:07:02.173 00:07:02.173 ' 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:02.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.173 --rc genhtml_branch_coverage=1 00:07:02.173 --rc genhtml_function_coverage=1 00:07:02.173 --rc genhtml_legend=1 00:07:02.173 --rc geninfo_all_blocks=1 00:07:02.173 --rc geninfo_unexecuted_blocks=1 00:07:02.173 00:07:02.173 ' 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:02.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.173 --rc genhtml_branch_coverage=1 00:07:02.173 --rc genhtml_function_coverage=1 00:07:02.173 --rc genhtml_legend=1 00:07:02.173 --rc geninfo_all_blocks=1 00:07:02.173 --rc geninfo_unexecuted_blocks=1 00:07:02.173 00:07:02.173 ' 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.173 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:02.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
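nvmftestinit, whose trace follows, is where the TCP/phy run moves one port of the discovered NIC pair into a private network namespace so that target and initiator can talk over real hardware on a single box. Consolidated from the commands traced below (the cvl_0_0/cvl_0_1 names and the 10.0.0.0/24 addresses are what this host discovered, not fixed values), the setup is roughly:

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                         # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator-side port stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                      # root namespace reaches the target address
  ip netns exec "$NS" ping -c 1 10.0.0.1                  # namespace reaches the initiator address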
00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:02.174 20:56:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.709 20:56:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:04.709 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:04.709 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:04.710 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:04.710 20:56:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:04.710 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:04.710 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:04.710 20:56:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:04.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:04.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:07:04.710 00:07:04.710 --- 10.0.0.2 ping statistics --- 00:07:04.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.710 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:04.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:04.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:07:04.710 00:07:04.710 --- 10.0.0.1 ping statistics --- 00:07:04.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.710 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2866893 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2866893 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2866893 ']' 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.710 20:56:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:04.710 [2024-11-19 20:56:38.343392] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:07:04.710 [2024-11-19 20:56:38.343548] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.710 [2024-11-19 20:56:38.498746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.969 [2024-11-19 20:56:38.644056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.969 [2024-11-19 20:56:38.644157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:04.969 [2024-11-19 20:56:38.644183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:04.969 [2024-11-19 20:56:38.644208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:04.969 [2024-11-19 20:56:38.644229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:04.969 [2024-11-19 20:56:38.646947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.969 [2024-11-19 20:56:38.646998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.969 [2024-11-19 20:56:38.647005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.535 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.535 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:05.535 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:05.535 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:05.535 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.535 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.535 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:05.535 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.535 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.535 [2024-11-19 20:56:39.326294] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.793 Malloc0 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.793 Delay0 
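Up to this point the abort test has built the target side over JSON-RPC: rpc_cmd is the autotest helper that effectively forwards to scripts/rpc.py against the nvmf_tgt waiting on /var/tmp/spdk.sock. Written out as plain invocations (a sketch of the recorded calls, flag values copied from the trace and not re-interpreted here), the sequence so far is:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256      # transport options exactly as recorded
  $RPC bdev_malloc_create 64 4096 -b Malloc0               # 64 MB malloc bdev with 4096-byte blocks
  # delay bdev stacked on Malloc0 with the latency values from the trace, presumably so
  # queued I/O lives long enough for the abort workload to have something to cancel
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000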
00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.793 [2024-11-19 20:56:39.457462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.793 20:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:06.051 [2024-11-19 20:56:39.614257] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:07.950 Initializing NVMe Controllers 00:07:07.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:07.950 controller IO queue size 128 less than required 00:07:07.950 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:07.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:07.950 Initialization complete. Launching workers. 
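The remaining wiring and the workload itself appear in the same stretch of trace: a subsystem that allows any host, the Delay0 namespace, data and discovery listeners on the namespaced address, and then the bundled abort example driven from the root namespace as the initiator (its per-abort accounting follows immediately below). The same calls as a standalone sketch, with the same hedges as the previous snippet:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0     # -a allows any host, -s sets the serial
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # initiator: run the example against the listener; -r carries the transport ID
  # (trtype/adrfam/traddr/trsvcid), the remaining flags are copied verbatim from the trace
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128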
00:07:07.950 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 22325 00:07:07.950 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 22382, failed to submit 66 00:07:07.950 success 22325, unsuccessful 57, failed 0 00:07:07.950 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:07.950 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.950 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:07.950 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.950 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:07.950 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:07.950 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:07.950 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:07.950 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:07.950 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:07.950 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:07.950 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:07.950 rmmod nvme_tcp 00:07:07.950 rmmod nvme_fabrics 00:07:07.950 rmmod nvme_keyring 00:07:08.209 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:08.209 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:08.209 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:08.209 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2866893 ']' 00:07:08.209 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2866893 00:07:08.209 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2866893 ']' 00:07:08.209 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2866893 00:07:08.209 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:08.209 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.209 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2866893 00:07:08.209 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:08.209 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:08.209 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2866893' 00:07:08.209 killing process with pid 2866893 00:07:08.209 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2866893 00:07:08.209 20:56:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2866893 00:07:09.581 20:56:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:09.581 20:56:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:09.581 20:56:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:09.581 20:56:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:09.581 20:56:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:09.581 20:56:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:09.581 20:56:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:09.581 20:56:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:09.581 20:56:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:09.581 20:56:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.581 20:56:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.582 20:56:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:11.485 00:07:11.485 real 0m9.310s 00:07:11.485 user 0m15.047s 00:07:11.485 sys 0m2.871s 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:11.485 ************************************ 00:07:11.485 END TEST nvmf_abort 00:07:11.485 ************************************ 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:11.485 ************************************ 00:07:11.485 START TEST nvmf_ns_hotplug_stress 00:07:11.485 ************************************ 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:11.485 * Looking for test storage... 
00:07:11.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:11.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.485 --rc genhtml_branch_coverage=1 00:07:11.485 --rc genhtml_function_coverage=1 00:07:11.485 --rc genhtml_legend=1 00:07:11.485 --rc geninfo_all_blocks=1 00:07:11.485 --rc geninfo_unexecuted_blocks=1 00:07:11.485 00:07:11.485 ' 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:11.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.485 --rc genhtml_branch_coverage=1 00:07:11.485 --rc genhtml_function_coverage=1 00:07:11.485 --rc genhtml_legend=1 00:07:11.485 --rc geninfo_all_blocks=1 00:07:11.485 --rc geninfo_unexecuted_blocks=1 00:07:11.485 00:07:11.485 ' 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.485 --rc genhtml_branch_coverage=1 00:07:11.485 --rc genhtml_function_coverage=1 00:07:11.485 --rc genhtml_legend=1 00:07:11.485 --rc geninfo_all_blocks=1 00:07:11.485 --rc geninfo_unexecuted_blocks=1 00:07:11.485 00:07:11.485 ' 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.485 --rc genhtml_branch_coverage=1 00:07:11.485 --rc genhtml_function_coverage=1 00:07:11.485 --rc genhtml_legend=1 00:07:11.485 --rc geninfo_all_blocks=1 00:07:11.485 --rc geninfo_unexecuted_blocks=1 00:07:11.485 00:07:11.485 ' 00:07:11.485 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.744 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:11.744 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.744 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.744 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.744 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.744 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.744 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:11.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:11.745 20:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:13.647 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.647 
20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:13.647 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:13.647 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:13.647 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.647 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:13.648 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:13.648 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:13.648 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:13.648 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:13.648 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:13.648 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:13.648 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.648 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:13.648 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:13.648 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:13.648 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.906 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.906 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:13.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:13.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:07:13.907 00:07:13.907 --- 10.0.0.2 ping statistics --- 00:07:13.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.907 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:13.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:13.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:07:13.907 00:07:13.907 --- 10.0.0.1 ping statistics --- 00:07:13.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.907 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2869407 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2869407 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
2869407 ']' 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.907 20:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:13.907 [2024-11-19 20:56:47.663219] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:07:13.907 [2024-11-19 20:56:47.663389] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.165 [2024-11-19 20:56:47.829359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.423 [2024-11-19 20:56:47.975050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:14.423 [2024-11-19 20:56:47.975141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:14.423 [2024-11-19 20:56:47.975175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:14.423 [2024-11-19 20:56:47.975205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:14.423 [2024-11-19 20:56:47.975226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
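The entries above show the NVMe-oF target application being launched inside the cvl_0_0_ns_spdk network namespace and the harness waiting for its RPC socket before issuing any rpc.py calls. A minimal sketch of that startup step, using the command recorded in the trace (the polling loop is only an illustrative stand-in for the harness's waitforlisten helper, and the binary path is abbreviated):

    # Start the SPDK NVMe-oF target inside the target network namespace
    # (flags as recorded above: instance 0, trace mask 0xFFFF, core mask 0xE).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Illustrative stand-in for waitforlisten: block until the RPC Unix socket
    # at /var/tmp/spdk.sock appears, bailing out if the target process died.
    while [ ! -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" || exit 1
        sleep 0.5
    done

Only once this socket is up does the test proceed to create the TCP transport, subsystem, and bdevs shown in the following entries.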
00:07:14.423 [2024-11-19 20:56:47.977950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.424 [2024-11-19 20:56:47.980109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.424 [2024-11-19 20:56:47.980131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.989 20:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.989 20:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:14.989 20:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:14.989 20:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:14.989 20:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:14.989 20:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.989 20:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:14.989 20:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:15.246 [2024-11-19 20:56:49.027184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.504 20:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:15.761 20:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.019 [2024-11-19 20:56:49.609309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.019 20:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:16.277 20:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:16.535 Malloc0 00:07:16.535 20:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:16.793 Delay0 00:07:16.793 20:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.051 20:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:17.309 NULL1 00:07:17.309 20:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:17.594 20:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2869835 00:07:17.594 20:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:17.594 20:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:17.594 20:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.991 Read completed with error (sct=0, sc=11) 00:07:18.991 20:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.248 20:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:19.248 20:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:19.506 true 00:07:19.506 20:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:19.506 20:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.439 20:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.439 20:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:20.439 20:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:20.697 true 00:07:20.697 20:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:20.697 20:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.956 20:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
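At this point the trace shows spdk_nvme_perf running 30 seconds of 512-byte random reads (queue depth 128) against 10.0.0.2:4420 while ns_hotplug_stress.sh churns the subsystem underneath it. A rough reconstruction of that loop from the @44–@50 entries (the script's exact control flow may differ; this only restates what the recorded RPC calls imply, with the full rpc.py path omitted):

    null_size=1000
    # Keep churning namespaces for as long as the perf process is alive.
    while kill -0 "$PERF_PID" 2>/dev/null; do
        # Hot-remove namespace 1 while I/O is in flight ...
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        # ... re-attach the Delay0 bdev as a namespace ...
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        # ... and grow the NULL1 null bdev one step at a time (1001, 1002, ...).
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"
    done

The repeating remove_ns / add_ns / bdev_null_resize triplets in the entries that follow are successive iterations of this loop.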
00:07:21.215 20:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:21.215 20:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:21.473 true 00:07:21.473 20:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:21.473 20:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.730 20:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.989 20:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:21.989 20:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:22.247 true 00:07:22.247 20:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:22.247 20:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.621 20:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.622 20:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:23.622 20:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:23.879 true 00:07:23.879 20:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:23.879 20:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.136 20:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.394 20:56:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:24.394 20:56:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:24.652 true 00:07:24.652 20:56:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:24.652 20:56:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.219 20:56:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.219 20:56:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:25.219 20:56:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:25.477 true 00:07:25.477 20:56:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:25.477 20:56:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.851 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.851 20:57:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.851 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.851 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.851 20:57:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:26.851 20:57:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:27.109 true 00:07:27.109 20:57:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:27.109 20:57:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.366 20:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.623 20:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:27.623 20:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:27.880 true 00:07:28.137 20:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:28.137 20:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.701 20:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.958 20:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:28.958 20:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:29.216 true 00:07:29.216 20:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:29.216 20:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.781 20:57:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.781 20:57:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:29.781 20:57:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:30.039 true 00:07:30.039 20:57:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:30.039 20:57:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.605 20:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.605 20:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:30.605 20:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:30.863 true 00:07:30.863 20:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:30.863 20:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.797 20:57:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.054 20:57:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:32.054 20:57:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:32.620 true 00:07:32.620 20:57:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:32.620 20:57:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.620 20:57:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.878 20:57:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:32.878 20:57:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:33.444 true 00:07:33.444 20:57:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:33.444 20:57:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.702 20:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.960 20:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:33.960 20:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:34.218 true 00:07:34.218 20:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:34.218 20:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.151 20:57:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.409 20:57:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:35.409 20:57:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:35.666 true 00:07:35.666 20:57:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:35.666 20:57:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.923 20:57:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.181 20:57:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:36.181 20:57:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:36.439 true 00:07:36.439 20:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:36.439 20:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:37.373 20:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.373 20:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:37.373 20:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:37.631 true 00:07:37.631 20:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:37.631 20:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.196 20:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.453 20:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:38.453 20:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:38.453 true 00:07:38.711 20:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:38.711 20:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.969 20:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.226 20:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:39.226 20:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:39.484 true 00:07:39.484 20:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:39.484 20:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.417 20:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.417 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.706 20:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:40.706 20:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:40.998 true 00:07:40.998 20:57:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:40.998 20:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.998 20:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.564 20:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:41.564 20:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:41.564 true 00:07:41.564 20:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:41.564 20:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.822 20:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.079 20:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:42.079 20:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:42.337 true 00:07:42.595 20:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:42.595 20:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.529 20:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.786 20:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:43.786 20:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:44.044 true 00:07:44.044 20:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:44.044 20:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.302 20:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.559 20:57:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:44.559 20:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:44.818 true 00:07:44.818 20:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:44.818 20:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.752 20:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.010 20:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:46.010 20:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:46.268 true 00:07:46.268 20:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:46.268 20:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.525 20:57:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.783 20:57:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:46.783 20:57:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:47.041 true 00:07:47.041 20:57:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:47.041 20:57:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.975 20:57:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.976 Initializing NVMe Controllers 00:07:47.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:47.976 Controller IO queue size 128, less than required. 00:07:47.976 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:47.976 Controller IO queue size 128, less than required. 00:07:47.976 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:47.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:47.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:47.976 Initialization complete. Launching workers. 00:07:47.976 ======================================================== 00:07:47.976 Latency(us) 00:07:47.976 Device Information : IOPS MiB/s Average min max 00:07:47.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 606.46 0.30 95384.05 3738.45 1016255.39 00:07:47.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7117.33 3.48 17986.75 4362.06 483858.02 00:07:47.976 ======================================================== 00:07:47.976 Total : 7723.79 3.77 24063.87 3738.45 1016255.39 00:07:47.976 00:07:48.233 20:57:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:48.234 20:57:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:48.491 true 00:07:48.491 20:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2869835 00:07:48.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2869835) - No such process 00:07:48.491 20:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2869835 00:07:48.491 20:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.749 20:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:49.006 20:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:49.007 20:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:49.007 20:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:49.007 20:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:49.007 20:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:49.265 null0 00:07:49.265 20:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:49.265 20:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:49.265 20:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:49.524 null1 00:07:49.524 20:57:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:49.524 20:57:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:49.524 20:57:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:49.782 null2 00:07:49.782 20:57:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:49.782 20:57:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:49.782 20:57:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:50.039 null3 00:07:50.039 20:57:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:50.039 20:57:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:50.039 20:57:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:50.297 null4 00:07:50.297 20:57:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:50.297 20:57:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:50.297 20:57:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:50.555 null5 00:07:50.555 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:50.555 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:50.555 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:50.812 null6 00:07:50.812 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:50.812 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:50.812 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:51.071 null7 00:07:51.071 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:51.071 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:51.071 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:51.071 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:51.071 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
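The @58–@64 entries above create eight null bdevs (null0 through null7) and fork one add_remove worker per namespace ID; per the @14–@17 entries, each worker loops ten times attaching its bdev as namespace N and detaching it again, and the parent then waits on all worker PIDs (the @66 wait recorded just below). A condensed sketch of that pattern as the trace records it (error handling and the full rpc.py path omitted):

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    pids=()
    for ((i = 0; i < 8; i++)); do
        add_remove $((i + 1)) "null$i" &   # add_remove 1 null0 ... add_remove 8 null7
        pids+=($!)
    done
    wait "${pids[@]}"

The interleaved @14–@18 entries that follow are these eight workers running concurrently against the same subsystem.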
00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2874519 2874520 2874522 2874524 2874526 2874528 2874530 2874532 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.072 20:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:51.331 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:51.331 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.331 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:51.331 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:51.331 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:51.331 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:51.331 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:51.331 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:51.898 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.899 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:51.899 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.899 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.899 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:51.899 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:52.158 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:52.158 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:52.158 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:52.158 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.158 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:52.158 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:52.158 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:52.416 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.416 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.416 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:52.416 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.416 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.416 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:52.416 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.416 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.416 20:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:52.416 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.416 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.416 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:52.416 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.416 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.416 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.416 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.417 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:52.417 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:52.417 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.417 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.417 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.417 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.417 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:52.417 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:52.675 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:52.675 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:52.675 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:52.675 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:52.675 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.675 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:52.675 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:52.675 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.934 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:53.193 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.193 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.193 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.193 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:53.193 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.193 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:53.193 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:53.193 20:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
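[Editor's note] Each iteration in this trace is just a pair of SPDK JSON-RPC calls against the same subsystem, issued through scripts/rpc.py. A single cycle can be reproduced by hand with the same invocations that appear above, assuming the target is still running and nqn.2016-06.io.spdk:cnode1 with its null bdevs exists (as it does at this point in the test); nvmf_get_subsystems is only added here as a convenient way to inspect the result:

```bash
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Hot-add null0 as namespace 1 of cnode1, check it, then hot-remove it again.
"$rpc_py" nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
"$rpc_py" nvmf_get_subsystems            # the namespaces list should now include nsid 1
"$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
```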
00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:53.451 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:53.452 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.452 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.452 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:53.710 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.967 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.967 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.967 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:53.967 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:53.968 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.968 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:53.968 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.225 20:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.484 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.484 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.484 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.484 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.484 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.484 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.484 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.484 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.741 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.741 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.741 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.741 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.741 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.742 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.742 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.742 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.742 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.742 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.742 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.742 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.742 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.742 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.742 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.742 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.742 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.742 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.742 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:54.742 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.742 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.742 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.742 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.742 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.000 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.000 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.000 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.000 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.000 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.000 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.000 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.000 20:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.258 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.258 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.258 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.258 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.258 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.259 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.259 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.259 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.259 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.259 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.259 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.259 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.259 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.259 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.259 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.259 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.259 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.259 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.259 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.259 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.259 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.259 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.259 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.259 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:55.517 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.517 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.776 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.776 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.776 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.776 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.776 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.776 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:56.034 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.034 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.034 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:56.034 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.034 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.034 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:56.034 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.034 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.034 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:56.034 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.034 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.034 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.034 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.034 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:56.034 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:56.034 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.034 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.034 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:56.034 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.035 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.035 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:56.035 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.035 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.035 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:56.292 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.292 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:56.292 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:56.292 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:56.292 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:56.292 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:56.292 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:56.292 20:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.551 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:56.810 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:56.810 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.810 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:56.810 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:56.810 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:56.810 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:56.810 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:56.810 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:57.068 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.068 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.068 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:57.069 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:57.069 rmmod nvme_tcp 00:07:57.069 rmmod nvme_fabrics 00:07:57.327 rmmod nvme_keyring 00:07:57.327 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:57.327 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:57.327 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:57.327 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2869407 ']' 00:07:57.327 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2869407 00:07:57.327 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2869407 ']' 00:07:57.327 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2869407 00:07:57.327 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:57.327 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.327 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2869407 00:07:57.327 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:57.327 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:57.327 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2869407' 00:07:57.327 killing process with pid 2869407 00:07:57.327 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2869407 00:07:57.327 20:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2869407 00:07:58.703 20:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:58.703 20:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:58.703 20:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:58.703 20:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:58.703 20:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:58.703 20:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:58.703 20:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:58.703 20:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:58.703 20:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:58.703 20:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.703 20:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.703 20:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:00.608 00:08:00.608 real 0m49.005s 00:08:00.608 user 3m43.430s 00:08:00.608 sys 0m16.542s 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:00.608 ************************************ 00:08:00.608 END TEST nvmf_ns_hotplug_stress 00:08:00.608 ************************************ 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:00.608 ************************************ 00:08:00.608 START TEST nvmf_delete_subsystem 00:08:00.608 ************************************ 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:00.608 * Looking for test storage... 
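The ns_hotplug_stress traces above loop ten times over rpc.py, attaching namespaces null0..null7 to nqn.2016-06.io.spdk:cnode1 with explicit NSIDs and then detaching those NSIDs again while the stress workload runs. A minimal bash sketch of that add/remove cycle follows; it assumes the subsystem and the null bdevs were created earlier in the test (not shown in this excerpt), calls rpc.py directly instead of the harness's rpc_cmd wrapper, and uses shuf only to imitate the shuffled removal order seen in the traces, which is not necessarily how ns_hotplug_stress.sh picks it.

# Illustrative sketch of the hotplug loop traced at ns_hotplug_stress.sh@16-@18
# (RPC names and the rpc.py path are taken from the log; the loop body is a sketch).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
for (( i = 0; i < 10; ++i )); do
    # Attach eight namespaces, pinning NSID b+1 to bdev null<b> as in the log.
    for b in {0..7}; do
        "$RPC" nvmf_subsystem_add_ns -n $((b + 1)) "$NQN" "null$b"
    done
    # Detach the same NSIDs again in a shuffled order to stress hot-remove.
    for nsid in $(shuf -e {1..8}); do
        "$RPC" nvmf_subsystem_remove_ns "$NQN" "$nsid"
    done
done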
00:08:00.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:00.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.608 --rc genhtml_branch_coverage=1 00:08:00.608 --rc genhtml_function_coverage=1 00:08:00.608 --rc genhtml_legend=1 00:08:00.608 --rc geninfo_all_blocks=1 00:08:00.608 --rc geninfo_unexecuted_blocks=1 00:08:00.608 00:08:00.608 ' 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:00.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.608 --rc genhtml_branch_coverage=1 00:08:00.608 --rc genhtml_function_coverage=1 00:08:00.608 --rc genhtml_legend=1 00:08:00.608 --rc geninfo_all_blocks=1 00:08:00.608 --rc geninfo_unexecuted_blocks=1 00:08:00.608 00:08:00.608 ' 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:00.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.608 --rc genhtml_branch_coverage=1 00:08:00.608 --rc genhtml_function_coverage=1 00:08:00.608 --rc genhtml_legend=1 00:08:00.608 --rc geninfo_all_blocks=1 00:08:00.608 --rc geninfo_unexecuted_blocks=1 00:08:00.608 00:08:00.608 ' 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:00.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.608 --rc genhtml_branch_coverage=1 00:08:00.608 --rc genhtml_function_coverage=1 00:08:00.608 --rc genhtml_legend=1 00:08:00.608 --rc geninfo_all_blocks=1 00:08:00.608 --rc geninfo_unexecuted_blocks=1 00:08:00.608 00:08:00.608 ' 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.608 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:00.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:00.609 20:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:03.144 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.144 
20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:03.144 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.144 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:03.144 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:03.145 Found net devices under 0000:0a:00.1: cvl_0_1 
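The gather_supported_nvmf_pci_devs traces above resolve each supported PCI function (the two 0000:0a:00.x ice ports in this run) to its kernel interface by listing /sys/bus/pci/devices/<pci>/net/. A standalone sketch of that sysfs lookup is below; the hard-coded PCI addresses are copied from the log output, and the real nvmf/common.sh additionally checks that each interface is up (the [[ up == up ]] traces), which is omitted here.

# Sketch of the PCI-to-netdev mapping used above: each candidate PCI address
# is mapped to the interface names the kernel exposes under its sysfs node.
pci_devs=(0000:0a:00.0 0000:0a:00.1)   # taken from the "Found ..." lines above
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue          # no netdev bound: skip
    pci_net_devs=("${pci_net_devs[@]##*/}")          # strip the sysfs path
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

In this log the lookup yields cvl_0_0 and cvl_0_1, which the TCP setup that follows splits between the cvl_0_0_ns_spdk network namespace (target side, 10.0.0.2) and the host (initiator side, 10.0.0.1).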
00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:03.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:08:03.145 00:08:03.145 --- 10.0.0.2 ping statistics --- 00:08:03.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.145 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:03.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:08:03.145 00:08:03.145 --- 10.0.0.1 ping statistics --- 00:08:03.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.145 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2877551 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2877551 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2877551 ']' 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.145 20:57:36 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.145 20:57:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.145 [2024-11-19 20:57:36.597638] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:08:03.145 [2024-11-19 20:57:36.597802] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.145 [2024-11-19 20:57:36.763667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:03.145 [2024-11-19 20:57:36.903494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.145 [2024-11-19 20:57:36.903588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.145 [2024-11-19 20:57:36.903615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.145 [2024-11-19 20:57:36.903640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.145 [2024-11-19 20:57:36.903660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.145 [2024-11-19 20:57:36.906294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.145 [2024-11-19 20:57:36.906295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.081 [2024-11-19 20:57:37.551214] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:04.081 20:57:37 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.081 [2024-11-19 20:57:37.568383] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.081 NULL1 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.081 Delay0 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2877701 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:04.081 20:57:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:04.081 [2024-11-19 20:57:37.703374] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
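The traces above set up the whole delete_subsystem scenario: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev (Delay0) so that queued I/O stays outstanding long enough to race with the teardown; spdk_nvme_perf is then launched in the background before the subsystem is deleted. A condensed sketch of that sequence, reusing the RPCs and perf flags from the log but calling rpc.py directly rather than through the harness's rpc_cmd/run_test wrappers, looks like this:

# Condensed sketch of delete_subsystem.sh@15-@30 above plus the @32 delete below.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
"$RPC" bdev_null_create NULL1 1000 512
"$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$RPC" nvmf_subsystem_add_ns "$NQN" Delay0

# Start 70/30 random read/write I/O against the listener in the background.
"$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

sleep 2
# Delete the subsystem while that I/O is still queued in the delay bdev; the
# "completed with error (sct=0, sc=8)" lines and the perf tool's "errors
# occurred" exit in the log are the expected result of yanking the subsystem away.
"$RPC" nvmf_delete_subsystem "$NQN"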
00:08:05.980 20:57:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.980 20:57:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.980 20:57:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 [2024-11-19 20:57:39.840941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 00:08:06.239 Write completed 
with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 starting I/O failed: -6 00:08:06.239 [2024-11-19 20:57:39.842777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, 
sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Write completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.239 Read completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Write completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Write completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Write completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Write completed with error (sct=0, sc=8) 00:08:06.240 Write completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Write completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Write completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Write completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Write completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Read completed 
with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Write completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Write completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 Read completed with error (sct=0, sc=8) 00:08:06.240 [2024-11-19 20:57:39.844012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set 00:08:07.194 [2024-11-19 20:57:40.801985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set 00:08:07.194 Read completed with error (sct=0, sc=8) 00:08:07.194 Write completed with error (sct=0, sc=8) 00:08:07.194 Write completed with error (sct=0, sc=8) 00:08:07.194 Read completed with error (sct=0, sc=8) 00:08:07.194 Read completed with error (sct=0, sc=8) 00:08:07.194 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 [2024-11-19 20:57:40.844081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 [2024-11-19 
20:57:40.846451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 [2024-11-19 20:57:40.847394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(6) to be set 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Read completed with error (sct=0, sc=8) 00:08:07.195 Write completed with error (sct=0, sc=8) 00:08:07.195 [2024-11-19 20:57:40.848169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016b00 is same with the state(6) to be set 00:08:07.195 Initializing NVMe Controllers 00:08:07.195 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:07.195 Controller IO queue size 
128, less than required. 00:08:07.195 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:07.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:07.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:07.195 Initialization complete. Launching workers. 00:08:07.195 ======================================================== 00:08:07.195 Latency(us) 00:08:07.195 Device Information : IOPS MiB/s Average min max 00:08:07.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.97 0.09 964874.12 2015.31 1017377.27 00:08:07.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.64 0.08 879911.18 683.53 1017214.73 00:08:07.195 ======================================================== 00:08:07.195 Total : 329.61 0.16 925011.81 683.53 1017377.27 00:08:07.195 00:08:07.195 [2024-11-19 20:57:40.853037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor 00:08:07.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:07.195 20:57:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.195 20:57:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:07.195 20:57:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2877701 00:08:07.195 20:57:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2877701 00:08:07.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2877701) - No such process 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2877701 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2877701 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2877701 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.763 [2024-11-19 20:57:41.374733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2878121 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2878121 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:07.763 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:07.763 [2024-11-19 20:57:41.492460] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
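The trace above recreates nqn.2016-06.io.spdk:cnode1, attaches the Delay0 bdev, relaunches spdk_nvme_perf against it, and then polls the perf PID with kill -0 every 0.5 s. Outside the harness the same sequence can be approximated with the sketch below; it assumes rpc_cmd resolves to scripts/rpc.py talking to the default /var/tmp/spdk.sock, and the loop is only a simplified stand-in for the delay++ > 20 bound in delete_subsystem.sh, not a reproduction of its NOT/wait helpers.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout used in this run; substitute your own
  NQN=nqn.2016-06.io.spdk:cnode1
  "$SPDK"/scripts/rpc.py nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
  "$SPDK"/scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  "$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Delay0
  # Same perf invocation as the harness: 3 s of 70/30 randrw, qd 128, 512 B I/O, 4 qpairs per ns
  "$SPDK"/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  delay=0
  # Poll the perf PID; the (( delay++ > 20 )) check only bounds how long we wait,
  # perf exiting (cleanly or not) is what ends the loop.
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && break
      sleep 0.5
  done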
00:08:08.328 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:08.328 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2878121 00:08:08.328 20:57:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:08.894 20:57:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:08.894 20:57:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2878121 00:08:08.894 20:57:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:09.152 20:57:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:09.152 20:57:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2878121 00:08:09.152 20:57:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:09.719 20:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:09.719 20:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2878121 00:08:09.719 20:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:10.285 20:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:10.285 20:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2878121 00:08:10.285 20:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:10.851 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:10.851 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2878121 00:08:10.851 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:11.109 Initializing NVMe Controllers 00:08:11.109 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:11.109 Controller IO queue size 128, less than required. 00:08:11.109 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:11.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:11.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:11.109 Initialization complete. Launching workers. 
00:08:11.109 ======================================================== 00:08:11.109 Latency(us) 00:08:11.109 Device Information : IOPS MiB/s Average min max 00:08:11.109 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1006776.87 1000298.59 1044361.06 00:08:11.109 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006051.14 1000236.17 1040763.28 00:08:11.109 ======================================================== 00:08:11.109 Total : 256.00 0.12 1006414.00 1000236.17 1044361.06 00:08:11.110 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2878121 00:08:11.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2878121) - No such process 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2878121 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:11.368 rmmod nvme_tcp 00:08:11.368 rmmod nvme_fabrics 00:08:11.368 rmmod nvme_keyring 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2877551 ']' 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2877551 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2877551 ']' 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2877551 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2877551 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2877551' 00:08:11.368 killing process with pid 2877551 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2877551 00:08:11.368 20:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2877551 00:08:12.744 20:57:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:12.744 20:57:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:12.744 20:57:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:12.744 20:57:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:12.744 20:57:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:12.744 20:57:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:12.744 20:57:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:12.744 20:57:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:12.744 20:57:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:12.744 20:57:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.744 20:57:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.744 20:57:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:14.654 00:08:14.654 real 0m13.987s 00:08:14.654 user 0m30.791s 00:08:14.654 sys 0m3.247s 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.654 ************************************ 00:08:14.654 END TEST nvmf_delete_subsystem 00:08:14.654 ************************************ 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:14.654 ************************************ 00:08:14.654 START TEST nvmf_host_management 00:08:14.654 ************************************ 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:14.654 * Looking for test storage... 
00:08:14.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:14.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.654 --rc genhtml_branch_coverage=1 00:08:14.654 --rc genhtml_function_coverage=1 00:08:14.654 --rc genhtml_legend=1 00:08:14.654 --rc geninfo_all_blocks=1 00:08:14.654 --rc geninfo_unexecuted_blocks=1 00:08:14.654 00:08:14.654 ' 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:14.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.654 --rc genhtml_branch_coverage=1 00:08:14.654 --rc genhtml_function_coverage=1 00:08:14.654 --rc genhtml_legend=1 00:08:14.654 --rc geninfo_all_blocks=1 00:08:14.654 --rc geninfo_unexecuted_blocks=1 00:08:14.654 00:08:14.654 ' 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:14.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.654 --rc genhtml_branch_coverage=1 00:08:14.654 --rc genhtml_function_coverage=1 00:08:14.654 --rc genhtml_legend=1 00:08:14.654 --rc geninfo_all_blocks=1 00:08:14.654 --rc geninfo_unexecuted_blocks=1 00:08:14.654 00:08:14.654 ' 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:14.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.654 --rc genhtml_branch_coverage=1 00:08:14.654 --rc genhtml_function_coverage=1 00:08:14.654 --rc genhtml_legend=1 00:08:14.654 --rc geninfo_all_blocks=1 00:08:14.654 --rc geninfo_unexecuted_blocks=1 00:08:14.654 00:08:14.654 ' 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.654 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:14.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:14.655 20:57:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:16.556 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:16.557 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:16.557 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:16.557 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.557 20:57:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:16.557 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:16.557 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:16.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:08:16.816 00:08:16.816 --- 10.0.0.2 ping statistics --- 00:08:16.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.816 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:16.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:08:16.816 00:08:16.816 --- 10.0.0.1 ping statistics --- 00:08:16.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.816 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.816 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2880618 00:08:16.817 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:16.817 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2880618 00:08:16.817 20:57:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2880618 ']' 00:08:16.817 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.817 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.817 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.817 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.817 20:57:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.817 [2024-11-19 20:57:50.495284] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:08:16.817 [2024-11-19 20:57:50.495439] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.075 [2024-11-19 20:57:50.651379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.075 [2024-11-19 20:57:50.793400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.075 [2024-11-19 20:57:50.793488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.075 [2024-11-19 20:57:50.793509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.075 [2024-11-19 20:57:50.793529] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.075 [2024-11-19 20:57:50.793545] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
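Before the target comes up, nvmf_tcp_init (nvmf/common.sh, traced above) moves the target-side port into a private network namespace so initiator- and target-side traffic use separate ports of the physical NIC. For readers reproducing the topology outside Jenkins, a standalone sketch follows; the interface names and 10.0.0.x addresses are the ones this run detected, $SPDK stands in for the checkout path, and the harness additionally tags its iptables rule with an SPDK_NVMF comment and waits for the RPC socket before issuing RPCs.

  NS=cvl_0_0_ns_spdk; TGT_IF=cvl_0_0; INI_IF=cvl_0_1
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                 # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"             # initiator side stays in the default netns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # initiator -> target sanity check
  ip netns exec "$NS" ping -c 1 10.0.0.1            # target -> initiator sanity check
  modprobe nvme-tcp                                 # kernel initiator used later by the tests
  ip netns exec "$NS" "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &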
00:08:17.075 [2024-11-19 20:57:50.796201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.075 [2024-11-19 20:57:50.796266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.075 [2024-11-19 20:57:50.796312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.075 [2024-11-19 20:57:50.796318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.012 [2024-11-19 20:57:51.471928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.012 Malloc0 00:08:18.012 [2024-11-19 20:57:51.596840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2880793 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2880793 /var/tmp/bdevperf.sock 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2880793 ']' 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:18.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:18.012 { 00:08:18.012 "params": { 00:08:18.012 "name": "Nvme$subsystem", 00:08:18.012 "trtype": "$TEST_TRANSPORT", 00:08:18.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.012 "adrfam": "ipv4", 00:08:18.012 "trsvcid": "$NVMF_PORT", 00:08:18.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.012 "hdgst": ${hdgst:-false}, 00:08:18.012 "ddgst": ${ddgst:-false} 00:08:18.012 }, 00:08:18.012 "method": "bdev_nvme_attach_controller" 00:08:18.012 } 00:08:18.012 EOF 00:08:18.012 )") 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:18.012 20:57:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:18.012 "params": { 00:08:18.012 "name": "Nvme0", 00:08:18.012 "trtype": "tcp", 00:08:18.012 "traddr": "10.0.0.2", 00:08:18.012 "adrfam": "ipv4", 00:08:18.012 "trsvcid": "4420", 00:08:18.012 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:18.012 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:18.012 "hdgst": false, 00:08:18.012 "ddgst": false 00:08:18.012 }, 00:08:18.012 "method": "bdev_nvme_attach_controller" 00:08:18.012 }' 00:08:18.012 [2024-11-19 20:57:51.717365] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:08:18.012 [2024-11-19 20:57:51.717517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880793 ] 00:08:18.271 [2024-11-19 20:57:51.856256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.271 [2024-11-19 20:57:51.984900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.839 Running I/O for 10 seconds... 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=195 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 195 -ge 100 ']' 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:19.100 20:57:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.100 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.100 [2024-11-19 20:57:52.748884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.100 [2024-11-19 20:57:52.748986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.100 [2024-11-19 20:57:52.749031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.100 [2024-11-19 20:57:52.749057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.100 [2024-11-19 20:57:52.749095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.100 [2024-11-19 20:57:52.749127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.100 [2024-11-19 20:57:52.749153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.100 [2024-11-19 20:57:52.749176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.100 [2024-11-19 20:57:52.749201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.100 [2024-11-19 20:57:52.749224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.100 [2024-11-19 20:57:52.749249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.100 [2024-11-19 20:57:52.749272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.100 [2024-11-19 20:57:52.749298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.100 [2024-11-19 20:57:52.749321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.100 [2024-11-19 20:57:52.749375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.100 [2024-11-19 20:57:52.749398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.100 [2024-11-19 20:57:52.749424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.100 
[2024-11-19 20:57:52.749456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.100 [2024-11-19 20:57:52.749481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.100 [2024-11-19 20:57:52.749503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.100 [2024-11-19 20:57:52.749527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.100 [2024-11-19 20:57:52.749549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.100 [2024-11-19 20:57:52.749585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.100 [2024-11-19 20:57:52.749609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.100 [2024-11-19 20:57:52.749634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.100 [2024-11-19 20:57:52.749656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.100 [2024-11-19 20:57:52.749680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.100 [2024-11-19 20:57:52.749702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.100 [2024-11-19 20:57:52.749727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.100 [2024-11-19 20:57:52.749749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.100 [2024-11-19 20:57:52.749774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.100 [2024-11-19 20:57:52.749796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.100 [2024-11-19 20:57:52.749821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.100 [2024-11-19 20:57:52.749842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.100 [2024-11-19 20:57:52.749868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.749890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.749915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 
20:57:52.749938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.749963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.749987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.750033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.750089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.750149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.750200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.750248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.750295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.750341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.750395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 
20:57:52.750452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.750498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.750546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.750607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.750653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.750709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.750753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.750799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.750850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.750895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 
20:57:52.750941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.750965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.750986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.751010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.751031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.751077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.751102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.751137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.751159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.751185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.751207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.751231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.751253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.751277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.751298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.751323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.751345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.751380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.751417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.751451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 
20:57:52.751477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.751507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.751530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.751554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.751576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.751600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.751621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.751645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.751666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.751690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.751711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.751735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.751756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.751780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.751801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.101 [2024-11-19 20:57:52.751825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.101 [2024-11-19 20:57:52.751846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.102 [2024-11-19 20:57:52.751870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.102 [2024-11-19 20:57:52.751891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.102 [2024-11-19 20:57:52.751914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.102 [2024-11-19 20:57:52.751936] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.102 [2024-11-19 20:57:52.751960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.102 [2024-11-19 20:57:52.751981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.102 [2024-11-19 20:57:52.752004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.102 [2024-11-19 20:57:52.752025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.102 [2024-11-19 20:57:52.752057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.102 [2024-11-19 20:57:52.752103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.102 [2024-11-19 20:57:52.752135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.102 [2024-11-19 20:57:52.752157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.102 [2024-11-19 20:57:52.752181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.102 [2024-11-19 20:57:52.752202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:19.102 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.102 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:19.102 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.102 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:19.102 [2024-11-19 20:57:52.753742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:19.102 task offset: 34176 on job bdev=Nvme0n1 fails 00:08:19.102 00:08:19.102 Latency(us) 00:08:19.102 [2024-11-19T19:57:52.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.102 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:19.102 Job: Nvme0n1 ended in about 0.21 seconds with error 00:08:19.102 Verification LBA range: start 0x0 length 0x400 00:08:19.102 Nvme0n1 : 0.21 1204.79 75.30 301.20 0.00 40280.86 4296.25 40972.14 00:08:19.102 [2024-11-19T19:57:52.897Z] =================================================================================================================== 00:08:19.102 [2024-11-19T19:57:52.897Z] Total : 1204.79 75.30 301.20 0.00 40280.86 4296.25 40972.14 00:08:19.102 [2024-11-19 20:57:52.758748] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:19.102 [2024-11-19 20:57:52.758801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:08:19.102 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.102 20:57:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:19.102 [2024-11-19 20:57:52.774851] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:20.039 20:57:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2880793 00:08:20.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2880793) - No such process 00:08:20.039 20:57:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:20.039 20:57:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:20.039 20:57:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:20.039 20:57:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:20.039 20:57:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:20.039 20:57:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:20.039 20:57:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:20.039 20:57:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:20.039 { 00:08:20.039 "params": { 00:08:20.039 "name": "Nvme$subsystem", 00:08:20.039 "trtype": "$TEST_TRANSPORT", 00:08:20.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:20.039 "adrfam": "ipv4", 00:08:20.039 "trsvcid": "$NVMF_PORT", 00:08:20.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:20.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:20.039 "hdgst": ${hdgst:-false}, 00:08:20.039 "ddgst": ${ddgst:-false} 00:08:20.039 }, 00:08:20.039 "method": "bdev_nvme_attach_controller" 00:08:20.039 } 00:08:20.039 EOF 00:08:20.039 )") 00:08:20.039 20:57:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:20.039 20:57:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:20.039 20:57:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:20.039 20:57:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:20.039 "params": { 00:08:20.039 "name": "Nvme0", 00:08:20.039 "trtype": "tcp", 00:08:20.039 "traddr": "10.0.0.2", 00:08:20.039 "adrfam": "ipv4", 00:08:20.039 "trsvcid": "4420", 00:08:20.039 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:20.039 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:20.039 "hdgst": false, 00:08:20.039 "ddgst": false 00:08:20.039 }, 00:08:20.039 "method": "bdev_nvme_attach_controller" 00:08:20.039 }' 00:08:20.299 [2024-11-19 20:57:53.848837] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
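For reference, the second bdevperf launch traced above feeds a JSON config generated on the fly through /dev/fd/62. A rough standalone equivalent, with the config written to a regular file instead, is sketched here; the bdev_nvme_attach_controller parameters are copied from the printf in the trace, while the surrounding "subsystems"/"bdev" wrapper is the general shape bdevperf's --json option expects rather than the literal output of gen_nvmf_target_json, and /tmp/bdevperf_nvme0.json is just an illustrative file name.

# Sketch only: assumes the target from this run is already listening on 10.0.0.2:4420.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from this run
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload flags as the trace: 64 outstanding I/Os, 64 KiB I/O size, verify, 1 second.
"$SPDK_DIR"/build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1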
00:08:20.299 [2024-11-19 20:57:53.848971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881074 ] 00:08:20.299 [2024-11-19 20:57:53.984810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.557 [2024-11-19 20:57:54.115829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.816 Running I/O for 1 seconds... 00:08:22.191 1265.00 IOPS, 79.06 MiB/s 00:08:22.191 Latency(us) 00:08:22.191 [2024-11-19T19:57:55.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.191 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:22.191 Verification LBA range: start 0x0 length 0x400 00:08:22.191 Nvme0n1 : 1.04 1295.29 80.96 0.00 0.00 48595.27 7233.23 45244.11 00:08:22.191 [2024-11-19T19:57:55.986Z] =================================================================================================================== 00:08:22.191 [2024-11-19T19:57:55.986Z] Total : 1295.29 80.96 0.00 0.00 48595.27 7233.23 45244.11 00:08:22.758 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:22.758 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:22.758 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:22.758 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:22.758 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:22.758 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:22.758 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:22.758 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:22.758 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:22.758 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:22.759 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:22.759 rmmod nvme_tcp 00:08:22.759 rmmod nvme_fabrics 00:08:22.759 rmmod nvme_keyring 00:08:22.759 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:22.759 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:22.759 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:22.759 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2880618 ']' 00:08:22.759 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2880618 00:08:22.759 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2880618 ']' 00:08:22.759 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2880618 00:08:22.759 20:57:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:22.759 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:22.759 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2880618 00:08:22.759 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:22.759 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:22.759 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2880618' 00:08:22.759 killing process with pid 2880618 00:08:22.759 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2880618 00:08:22.759 20:57:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2880618 00:08:24.132 [2024-11-19 20:57:57.687090] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:24.132 20:57:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:24.132 20:57:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:24.132 20:57:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:24.132 20:57:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:24.132 20:57:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:24.132 20:57:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:24.132 20:57:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:24.132 20:57:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:24.132 20:57:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:24.132 20:57:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.132 20:57:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.132 20:57:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.045 20:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:26.045 20:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:26.305 00:08:26.305 real 0m11.614s 00:08:26.305 user 0m31.613s 00:08:26.305 sys 0m2.995s 00:08:26.305 20:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.305 20:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.305 ************************************ 00:08:26.305 END TEST nvmf_host_management 00:08:26.305 ************************************ 00:08:26.305 20:57:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
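In the host_management teardown just above, nvmftestfini's firewall cleanup appears as three separate xtrace fragments (iptables-save, grep -v SPDK_NVMF, iptables-restore). Read together they form one pipeline that drops only the rules the test tagged with an SPDK_NVMF comment, roughly:

# Condensed sketch of the traced "iptr" cleanup; run as root.
# Re-save the current rules, filter out every rule carrying the SPDK_NVMF comment,
# and restore the remainder untouched.
iptables-save | grep -v SPDK_NVMF | iptables-restore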
00:08:26.305 20:57:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:26.305 20:57:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.305 20:57:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:26.305 ************************************ 00:08:26.305 START TEST nvmf_lvol 00:08:26.305 ************************************ 00:08:26.305 20:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:26.305 * Looking for test storage... 00:08:26.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:26.305 20:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:26.305 20:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:26.305 20:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:26.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.305 --rc genhtml_branch_coverage=1 00:08:26.305 --rc genhtml_function_coverage=1 00:08:26.305 --rc genhtml_legend=1 00:08:26.305 --rc geninfo_all_blocks=1 00:08:26.305 --rc geninfo_unexecuted_blocks=1 00:08:26.305 00:08:26.305 ' 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:26.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.305 --rc genhtml_branch_coverage=1 00:08:26.305 --rc genhtml_function_coverage=1 00:08:26.305 --rc genhtml_legend=1 00:08:26.305 --rc geninfo_all_blocks=1 00:08:26.305 --rc geninfo_unexecuted_blocks=1 00:08:26.305 00:08:26.305 ' 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:26.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.305 --rc genhtml_branch_coverage=1 00:08:26.305 --rc genhtml_function_coverage=1 00:08:26.305 --rc genhtml_legend=1 00:08:26.305 --rc geninfo_all_blocks=1 00:08:26.305 --rc geninfo_unexecuted_blocks=1 00:08:26.305 00:08:26.305 ' 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:26.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.305 --rc genhtml_branch_coverage=1 00:08:26.305 --rc genhtml_function_coverage=1 00:08:26.305 --rc genhtml_legend=1 00:08:26.305 --rc geninfo_all_blocks=1 00:08:26.305 --rc geninfo_unexecuted_blocks=1 00:08:26.305 00:08:26.305 ' 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
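The lcov check traced above boils down to a field-by-field version comparison: "1.15" is split on dots and compared against "2", and because it is older, the 1.x-style --rc lcov_branch_coverage / --rc lcov_function_coverage options are kept. A condensed reconstruction of that comparison (an illustrative helper, not the verbatim scripts/common.sh code) might look like:

lt() {
    # Split both versions on '.'/'-' and compare numerically, field by field.
    local IFS=.- i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1  # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov older than 2.x: keep the --rc lcov_*_coverage flags"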
00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.305 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:26.306 20:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:28.845 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:28.845 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:28.845 20:58:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:28.845 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:28.845 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:28.845 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:28.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:28.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:08:28.846 00:08:28.846 --- 10.0.0.2 ping statistics --- 00:08:28.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.846 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:28.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:28.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:08:28.846 00:08:28.846 --- 10.0.0.1 ping statistics --- 00:08:28.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.846 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2883491 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2883491 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2883491 ']' 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.846 20:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:28.846 [2024-11-19 20:58:02.347196] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:08:28.846 [2024-11-19 20:58:02.347333] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.846 [2024-11-19 20:58:02.501822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:29.106 [2024-11-19 20:58:02.642948] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.106 [2024-11-19 20:58:02.643037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.106 [2024-11-19 20:58:02.643062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.106 [2024-11-19 20:58:02.643100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.106 [2024-11-19 20:58:02.643120] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.106 [2024-11-19 20:58:02.645874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.106 [2024-11-19 20:58:02.645937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.106 [2024-11-19 20:58:02.645942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.672 20:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.672 20:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:29.672 20:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:29.672 20:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:29.672 20:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:29.672 20:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.673 20:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:29.930 [2024-11-19 20:58:03.671472] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.930 20:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:30.496 20:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:30.496 20:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:30.755 20:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:30.755 20:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:31.013 20:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:31.270 20:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f02ad206-cdc1-438a-a0be-bc0e4cf892a3 00:08:31.270 20:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f02ad206-cdc1-438a-a0be-bc0e4cf892a3 lvol 20 00:08:31.528 20:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2cdb4164-a857-491f-87e2-d9943bd1e836 00:08:31.528 20:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:31.786 20:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2cdb4164-a857-491f-87e2-d9943bd1e836 00:08:32.044 20:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:32.303 [2024-11-19 20:58:06.065009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.303 20:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:32.869 20:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2883988 00:08:32.869 20:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:32.869 20:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:33.806 20:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2cdb4164-a857-491f-87e2-d9943bd1e836 MY_SNAPSHOT 00:08:34.066 20:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6d6f9f27-090b-460d-a0ef-bc893ecb57d8 00:08:34.066 20:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2cdb4164-a857-491f-87e2-d9943bd1e836 30 00:08:34.386 20:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6d6f9f27-090b-460d-a0ef-bc893ecb57d8 MY_CLONE 00:08:34.670 20:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=faa0c2a7-1866-4030-b055-118c9124697b 00:08:34.670 20:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate faa0c2a7-1866-4030-b055-118c9124697b 00:08:35.618 20:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2883988 00:08:43.744 Initializing NVMe Controllers 00:08:43.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:43.744 Controller IO queue size 128, less than required. 00:08:43.744 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:43.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:43.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:43.744 Initialization complete. Launching workers. 00:08:43.744 ======================================================== 00:08:43.744 Latency(us) 00:08:43.744 Device Information : IOPS MiB/s Average min max 00:08:43.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8102.80 31.65 15813.78 762.02 198293.92 00:08:43.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8000.00 31.25 16006.09 3338.97 145188.98 00:08:43.744 ======================================================== 00:08:43.744 Total : 16102.80 62.90 15909.32 762.02 198293.92 00:08:43.744 00:08:43.744 20:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:43.744 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2cdb4164-a857-491f-87e2-d9943bd1e836 00:08:44.004 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f02ad206-cdc1-438a-a0be-bc0e4cf892a3 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:44.262 rmmod nvme_tcp 00:08:44.262 rmmod nvme_fabrics 00:08:44.262 rmmod nvme_keyring 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2883491 ']' 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2883491 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2883491 ']' 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2883491 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2883491 00:08:44.262 20:58:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2883491' 00:08:44.262 killing process with pid 2883491 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2883491 00:08:44.262 20:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2883491 00:08:45.641 20:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:45.641 20:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:45.641 20:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:45.641 20:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:45.641 20:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:45.641 20:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:45.641 20:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:45.641 20:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:45.641 20:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:45.641 20:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.641 20:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.641 20:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.180 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:48.181 00:08:48.181 real 0m21.490s 00:08:48.181 user 1m11.905s 00:08:48.181 sys 0m5.352s 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:48.181 ************************************ 00:08:48.181 END TEST nvmf_lvol 00:08:48.181 ************************************ 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:48.181 ************************************ 00:08:48.181 START TEST nvmf_lvs_grow 00:08:48.181 ************************************ 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:48.181 * Looking for test storage... 
00:08:48.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:48.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.181 --rc genhtml_branch_coverage=1 00:08:48.181 --rc genhtml_function_coverage=1 00:08:48.181 --rc genhtml_legend=1 00:08:48.181 --rc geninfo_all_blocks=1 00:08:48.181 --rc geninfo_unexecuted_blocks=1 00:08:48.181 00:08:48.181 ' 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:48.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.181 --rc genhtml_branch_coverage=1 00:08:48.181 --rc genhtml_function_coverage=1 00:08:48.181 --rc genhtml_legend=1 00:08:48.181 --rc geninfo_all_blocks=1 00:08:48.181 --rc geninfo_unexecuted_blocks=1 00:08:48.181 00:08:48.181 ' 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:48.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.181 --rc genhtml_branch_coverage=1 00:08:48.181 --rc genhtml_function_coverage=1 00:08:48.181 --rc genhtml_legend=1 00:08:48.181 --rc geninfo_all_blocks=1 00:08:48.181 --rc geninfo_unexecuted_blocks=1 00:08:48.181 00:08:48.181 ' 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:48.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.181 --rc genhtml_branch_coverage=1 00:08:48.181 --rc genhtml_function_coverage=1 00:08:48.181 --rc genhtml_legend=1 00:08:48.181 --rc geninfo_all_blocks=1 00:08:48.181 --rc geninfo_unexecuted_blocks=1 00:08:48.181 00:08:48.181 ' 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:48.181 20:58:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.181 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:48.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:48.182 20:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:50.086 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:50.087 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:50.087 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:50.087 20:58:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:50.087 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:50.087 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:50.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:08:50.087 00:08:50.087 --- 10.0.0.2 ping statistics --- 00:08:50.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.087 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:50.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:50.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:08:50.087 00:08:50.087 --- 10.0.0.1 ping statistics --- 00:08:50.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.087 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2887492 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2887492 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2887492 ']' 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.087 20:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.087 [2024-11-19 20:58:23.852255] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:08:50.087 [2024-11-19 20:58:23.852407] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.346 [2024-11-19 20:58:24.005406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.604 [2024-11-19 20:58:24.142558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.604 [2024-11-19 20:58:24.142645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.604 [2024-11-19 20:58:24.142671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.604 [2024-11-19 20:58:24.142694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.604 [2024-11-19 20:58:24.142714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.604 [2024-11-19 20:58:24.144371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.170 20:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.170 20:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:51.170 20:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:51.170 20:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:51.170 20:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.170 20:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.170 20:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:51.428 [2024-11-19 20:58:25.080297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.428 20:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:51.428 20:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.428 20:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.428 20:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.428 ************************************ 00:08:51.428 START TEST lvs_grow_clean 00:08:51.428 ************************************ 00:08:51.428 20:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:51.428 20:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:51.428 20:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:51.428 20:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:51.428 20:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:51.428 20:58:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:51.428 20:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:51.428 20:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:51.428 20:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:51.428 20:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:51.686 20:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:51.686 20:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:52.252 20:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=cc867615-f04c-40ab-8b2a-9f9fdf4345d0 00:08:52.253 20:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc867615-f04c-40ab-8b2a-9f9fdf4345d0 00:08:52.253 20:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:52.253 20:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:52.253 20:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:52.253 20:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cc867615-f04c-40ab-8b2a-9f9fdf4345d0 lvol 150 00:08:52.511 20:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ba8cbb0d-d809-4de6-891e-89912db078b0 00:08:52.511 20:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:52.511 20:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:52.770 [2024-11-19 20:58:26.547212] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:52.770 [2024-11-19 20:58:26.547355] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:52.770 true 00:08:53.027 20:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
cc867615-f04c-40ab-8b2a-9f9fdf4345d0 00:08:53.027 20:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:53.285 20:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:53.285 20:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:53.544 20:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ba8cbb0d-d809-4de6-891e-89912db078b0 00:08:53.801 20:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:54.059 [2024-11-19 20:58:27.686882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.059 20:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:54.318 20:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2887977 00:08:54.318 20:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:54.318 20:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:54.318 20:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2887977 /var/tmp/bdevperf.sock 00:08:54.318 20:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2887977 ']' 00:08:54.318 20:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:54.318 20:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.318 20:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:54.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:54.318 20:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.318 20:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:54.318 [2024-11-19 20:58:28.058756] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:08:54.318 [2024-11-19 20:58:28.058900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887977 ] 00:08:54.577 [2024-11-19 20:58:28.202609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.577 [2024-11-19 20:58:28.340833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.512 20:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.512 20:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:55.512 20:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:55.770 Nvme0n1 00:08:55.770 20:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:56.029 [ 00:08:56.029 { 00:08:56.029 "name": "Nvme0n1", 00:08:56.029 "aliases": [ 00:08:56.029 "ba8cbb0d-d809-4de6-891e-89912db078b0" 00:08:56.029 ], 00:08:56.029 "product_name": "NVMe disk", 00:08:56.029 "block_size": 4096, 00:08:56.029 "num_blocks": 38912, 00:08:56.029 "uuid": "ba8cbb0d-d809-4de6-891e-89912db078b0", 00:08:56.029 "numa_id": 0, 00:08:56.029 "assigned_rate_limits": { 00:08:56.029 "rw_ios_per_sec": 0, 00:08:56.029 "rw_mbytes_per_sec": 0, 00:08:56.029 "r_mbytes_per_sec": 0, 00:08:56.029 "w_mbytes_per_sec": 0 00:08:56.029 }, 00:08:56.029 "claimed": false, 00:08:56.029 "zoned": false, 00:08:56.029 "supported_io_types": { 00:08:56.029 "read": true, 00:08:56.029 "write": true, 00:08:56.029 "unmap": true, 00:08:56.029 "flush": true, 00:08:56.029 "reset": true, 00:08:56.029 "nvme_admin": true, 00:08:56.029 "nvme_io": true, 00:08:56.029 "nvme_io_md": false, 00:08:56.029 "write_zeroes": true, 00:08:56.029 "zcopy": false, 00:08:56.029 "get_zone_info": false, 00:08:56.029 "zone_management": false, 00:08:56.029 "zone_append": false, 00:08:56.029 "compare": true, 00:08:56.029 "compare_and_write": true, 00:08:56.029 "abort": true, 00:08:56.029 "seek_hole": false, 00:08:56.029 "seek_data": false, 00:08:56.029 "copy": true, 00:08:56.029 "nvme_iov_md": false 00:08:56.029 }, 00:08:56.029 "memory_domains": [ 00:08:56.029 { 00:08:56.029 "dma_device_id": "system", 00:08:56.029 "dma_device_type": 1 00:08:56.029 } 00:08:56.029 ], 00:08:56.029 "driver_specific": { 00:08:56.029 "nvme": [ 00:08:56.029 { 00:08:56.029 "trid": { 00:08:56.029 "trtype": "TCP", 00:08:56.029 "adrfam": "IPv4", 00:08:56.029 "traddr": "10.0.0.2", 00:08:56.029 "trsvcid": "4420", 00:08:56.029 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:56.029 }, 00:08:56.029 "ctrlr_data": { 00:08:56.029 "cntlid": 1, 00:08:56.029 "vendor_id": "0x8086", 00:08:56.029 "model_number": "SPDK bdev Controller", 00:08:56.029 "serial_number": "SPDK0", 00:08:56.029 "firmware_revision": "25.01", 00:08:56.029 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:56.029 "oacs": { 00:08:56.029 "security": 0, 00:08:56.029 "format": 0, 00:08:56.029 "firmware": 0, 00:08:56.029 "ns_manage": 0 00:08:56.029 }, 00:08:56.029 "multi_ctrlr": true, 00:08:56.029 
"ana_reporting": false 00:08:56.029 }, 00:08:56.029 "vs": { 00:08:56.029 "nvme_version": "1.3" 00:08:56.029 }, 00:08:56.029 "ns_data": { 00:08:56.029 "id": 1, 00:08:56.029 "can_share": true 00:08:56.029 } 00:08:56.029 } 00:08:56.029 ], 00:08:56.029 "mp_policy": "active_passive" 00:08:56.029 } 00:08:56.029 } 00:08:56.029 ] 00:08:56.029 20:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2888235 00:08:56.029 20:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:56.029 20:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:56.029 Running I/O for 10 seconds... 00:08:57.404 Latency(us) 00:08:57.404 [2024-11-19T19:58:31.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.404 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.404 Nvme0n1 : 1.00 10669.00 41.68 0.00 0.00 0.00 0.00 0.00 00:08:57.404 [2024-11-19T19:58:31.199Z] =================================================================================================================== 00:08:57.404 [2024-11-19T19:58:31.199Z] Total : 10669.00 41.68 0.00 0.00 0.00 0.00 0.00 00:08:57.404 00:08:57.971 20:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cc867615-f04c-40ab-8b2a-9f9fdf4345d0 00:08:58.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.229 Nvme0n1 : 2.00 10795.50 42.17 0.00 0.00 0.00 0.00 0.00 00:08:58.229 [2024-11-19T19:58:32.024Z] =================================================================================================================== 00:08:58.229 [2024-11-19T19:58:32.024Z] Total : 10795.50 42.17 0.00 0.00 0.00 0.00 0.00 00:08:58.229 00:08:58.487 true 00:08:58.487 20:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc867615-f04c-40ab-8b2a-9f9fdf4345d0 00:08:58.487 20:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:58.746 20:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:58.746 20:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:58.746 20:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2888235 00:08:59.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.313 Nvme0n1 : 3.00 10880.00 42.50 0.00 0.00 0.00 0.00 0.00 00:08:59.313 [2024-11-19T19:58:33.108Z] =================================================================================================================== 00:08:59.313 [2024-11-19T19:58:33.108Z] Total : 10880.00 42.50 0.00 0.00 0.00 0.00 0.00 00:08:59.313 00:09:00.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.250 Nvme0n1 : 4.00 10930.75 42.70 0.00 0.00 0.00 0.00 0.00 00:09:00.250 [2024-11-19T19:58:34.045Z] 
=================================================================================================================== 00:09:00.250 [2024-11-19T19:58:34.045Z] Total : 10930.75 42.70 0.00 0.00 0.00 0.00 0.00 00:09:00.250 00:09:01.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.183 Nvme0n1 : 5.00 10954.40 42.79 0.00 0.00 0.00 0.00 0.00 00:09:01.183 [2024-11-19T19:58:34.978Z] =================================================================================================================== 00:09:01.183 [2024-11-19T19:58:34.978Z] Total : 10954.40 42.79 0.00 0.00 0.00 0.00 0.00 00:09:01.183 00:09:02.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.118 Nvme0n1 : 6.00 10991.33 42.93 0.00 0.00 0.00 0.00 0.00 00:09:02.118 [2024-11-19T19:58:35.913Z] =================================================================================================================== 00:09:02.118 [2024-11-19T19:58:35.913Z] Total : 10991.33 42.93 0.00 0.00 0.00 0.00 0.00 00:09:02.118 00:09:03.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.052 Nvme0n1 : 7.00 11017.71 43.04 0.00 0.00 0.00 0.00 0.00 00:09:03.052 [2024-11-19T19:58:36.847Z] =================================================================================================================== 00:09:03.052 [2024-11-19T19:58:36.847Z] Total : 11017.71 43.04 0.00 0.00 0.00 0.00 0.00 00:09:03.052 00:09:04.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.426 Nvme0n1 : 8.00 11025.88 43.07 0.00 0.00 0.00 0.00 0.00 00:09:04.426 [2024-11-19T19:58:38.221Z] =================================================================================================================== 00:09:04.426 [2024-11-19T19:58:38.221Z] Total : 11025.88 43.07 0.00 0.00 0.00 0.00 0.00 00:09:04.426 00:09:05.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.361 Nvme0n1 : 9.00 11028.44 43.08 0.00 0.00 0.00 0.00 0.00 00:09:05.361 [2024-11-19T19:58:39.156Z] =================================================================================================================== 00:09:05.361 [2024-11-19T19:58:39.156Z] Total : 11028.44 43.08 0.00 0.00 0.00 0.00 0.00 00:09:05.361 00:09:06.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.296 Nvme0n1 : 10.00 11043.20 43.14 0.00 0.00 0.00 0.00 0.00 00:09:06.296 [2024-11-19T19:58:40.091Z] =================================================================================================================== 00:09:06.296 [2024-11-19T19:58:40.091Z] Total : 11043.20 43.14 0.00 0.00 0.00 0.00 0.00 00:09:06.296 00:09:06.296 00:09:06.296 Latency(us) 00:09:06.296 [2024-11-19T19:58:40.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.296 Nvme0n1 : 10.01 11043.09 43.14 0.00 0.00 11584.40 2597.17 22913.33 00:09:06.296 [2024-11-19T19:58:40.091Z] =================================================================================================================== 00:09:06.296 [2024-11-19T19:58:40.091Z] Total : 11043.09 43.14 0.00 0.00 11584.40 2597.17 22913.33 00:09:06.296 { 00:09:06.296 "results": [ 00:09:06.296 { 00:09:06.296 "job": "Nvme0n1", 00:09:06.296 "core_mask": "0x2", 00:09:06.296 "workload": "randwrite", 00:09:06.296 "status": "finished", 00:09:06.296 "queue_depth": 128, 00:09:06.296 "io_size": 4096, 00:09:06.296 
"runtime": 10.011689, 00:09:06.296 "iops": 11043.091730076714, 00:09:06.296 "mibps": 43.137077070612165, 00:09:06.296 "io_failed": 0, 00:09:06.296 "io_timeout": 0, 00:09:06.296 "avg_latency_us": 11584.400472530418, 00:09:06.296 "min_latency_us": 2597.1674074074076, 00:09:06.296 "max_latency_us": 22913.327407407407 00:09:06.296 } 00:09:06.296 ], 00:09:06.296 "core_count": 1 00:09:06.296 } 00:09:06.296 20:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2887977 00:09:06.296 20:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2887977 ']' 00:09:06.296 20:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2887977 00:09:06.296 20:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:06.296 20:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.296 20:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2887977 00:09:06.296 20:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:06.296 20:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:06.296 20:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2887977' 00:09:06.296 killing process with pid 2887977 00:09:06.296 20:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2887977 00:09:06.296 Received shutdown signal, test time was about 10.000000 seconds 00:09:06.296 00:09:06.296 Latency(us) 00:09:06.296 [2024-11-19T19:58:40.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.296 [2024-11-19T19:58:40.091Z] =================================================================================================================== 00:09:06.296 [2024-11-19T19:58:40.091Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:06.296 20:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2887977 00:09:07.249 20:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:07.506 20:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:07.765 20:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc867615-f04c-40ab-8b2a-9f9fdf4345d0 00:09:07.765 20:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:08.023 20:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:08.023 20:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:08.023 20:58:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:08.281 [2024-11-19 20:58:41.967313] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:08.281 20:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc867615-f04c-40ab-8b2a-9f9fdf4345d0 00:09:08.281 20:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:08.281 20:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc867615-f04c-40ab-8b2a-9f9fdf4345d0 00:09:08.281 20:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.281 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.281 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.281 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.281 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.281 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.281 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.281 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:08.281 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc867615-f04c-40ab-8b2a-9f9fdf4345d0 00:09:08.540 request: 00:09:08.540 { 00:09:08.540 "uuid": "cc867615-f04c-40ab-8b2a-9f9fdf4345d0", 00:09:08.540 "method": "bdev_lvol_get_lvstores", 00:09:08.540 "req_id": 1 00:09:08.540 } 00:09:08.540 Got JSON-RPC error response 00:09:08.540 response: 00:09:08.540 { 00:09:08.540 "code": -19, 00:09:08.540 "message": "No such device" 00:09:08.540 } 00:09:08.540 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:08.540 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:08.540 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:08.540 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:08.540 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:08.799 aio_bdev 00:09:08.799 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ba8cbb0d-d809-4de6-891e-89912db078b0 00:09:08.799 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=ba8cbb0d-d809-4de6-891e-89912db078b0 00:09:08.799 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.799 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:08.799 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.799 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.799 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:09.364 20:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ba8cbb0d-d809-4de6-891e-89912db078b0 -t 2000 00:09:09.364 [ 00:09:09.364 { 00:09:09.364 "name": "ba8cbb0d-d809-4de6-891e-89912db078b0", 00:09:09.364 "aliases": [ 00:09:09.364 "lvs/lvol" 00:09:09.364 ], 00:09:09.364 "product_name": "Logical Volume", 00:09:09.364 "block_size": 4096, 00:09:09.364 "num_blocks": 38912, 00:09:09.364 "uuid": "ba8cbb0d-d809-4de6-891e-89912db078b0", 00:09:09.364 "assigned_rate_limits": { 00:09:09.364 "rw_ios_per_sec": 0, 00:09:09.364 "rw_mbytes_per_sec": 0, 00:09:09.364 "r_mbytes_per_sec": 0, 00:09:09.364 "w_mbytes_per_sec": 0 00:09:09.364 }, 00:09:09.364 "claimed": false, 00:09:09.364 "zoned": false, 00:09:09.364 "supported_io_types": { 00:09:09.364 "read": true, 00:09:09.364 "write": true, 00:09:09.364 "unmap": true, 00:09:09.364 "flush": false, 00:09:09.364 "reset": true, 00:09:09.364 "nvme_admin": false, 00:09:09.364 "nvme_io": false, 00:09:09.364 "nvme_io_md": false, 00:09:09.364 "write_zeroes": true, 00:09:09.364 "zcopy": false, 00:09:09.364 "get_zone_info": false, 00:09:09.364 "zone_management": false, 00:09:09.364 "zone_append": false, 00:09:09.364 "compare": false, 00:09:09.364 "compare_and_write": false, 00:09:09.364 "abort": false, 00:09:09.364 "seek_hole": true, 00:09:09.364 "seek_data": true, 00:09:09.364 "copy": false, 00:09:09.364 "nvme_iov_md": false 00:09:09.364 }, 00:09:09.364 "driver_specific": { 00:09:09.364 "lvol": { 00:09:09.364 "lvol_store_uuid": "cc867615-f04c-40ab-8b2a-9f9fdf4345d0", 00:09:09.364 "base_bdev": "aio_bdev", 00:09:09.364 "thin_provision": false, 00:09:09.364 "num_allocated_clusters": 38, 00:09:09.364 "snapshot": false, 00:09:09.364 "clone": false, 00:09:09.364 "esnap_clone": false 00:09:09.364 } 00:09:09.364 } 00:09:09.364 } 00:09:09.364 ] 00:09:09.623 20:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:09.623 20:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc867615-f04c-40ab-8b2a-9f9fdf4345d0 00:09:09.623 
20:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:09.882 20:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:09.882 20:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc867615-f04c-40ab-8b2a-9f9fdf4345d0 00:09:09.882 20:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:10.140 20:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:10.140 20:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ba8cbb0d-d809-4de6-891e-89912db078b0 00:09:10.398 20:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cc867615-f04c-40ab-8b2a-9f9fdf4345d0 00:09:10.656 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:10.914 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:10.914 00:09:10.914 real 0m19.455s 00:09:10.914 user 0m19.169s 00:09:10.914 sys 0m1.993s 00:09:10.914 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.914 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:10.914 ************************************ 00:09:10.914 END TEST lvs_grow_clean 00:09:10.914 ************************************ 00:09:10.914 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:10.914 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:10.914 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.914 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:10.914 ************************************ 00:09:10.914 START TEST lvs_grow_dirty 00:09:10.914 ************************************ 00:09:10.914 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:10.914 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:10.914 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:10.914 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:10.914 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:10.915 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:10.915 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:10.915 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:10.915 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:10.915 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:11.482 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:11.482 20:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:11.740 20:58:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b3785a0c-1f0f-490c-94ef-089a9d484350 00:09:11.740 20:58:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3785a0c-1f0f-490c-94ef-089a9d484350 00:09:11.740 20:58:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:11.999 20:58:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:11.999 20:58:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:11.999 20:58:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b3785a0c-1f0f-490c-94ef-089a9d484350 lvol 150 00:09:12.257 20:58:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e9dda500-d27e-4266-ab71-a63679eca1ab 00:09:12.257 20:58:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:12.257 20:58:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:12.516 [2024-11-19 20:58:46.073958] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:12.516 [2024-11-19 20:58:46.074099] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:12.516 true 00:09:12.516 20:58:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3785a0c-1f0f-490c-94ef-089a9d484350 00:09:12.516 20:58:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:12.774 20:58:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:12.774 20:58:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:13.032 20:58:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e9dda500-d27e-4266-ab71-a63679eca1ab 00:09:13.290 20:58:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:13.548 [2024-11-19 20:58:47.165645] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.548 20:58:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:13.806 20:58:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2890315 00:09:13.806 20:58:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:13.806 20:58:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:13.806 20:58:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2890315 /var/tmp/bdevperf.sock 00:09:13.806 20:58:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2890315 ']' 00:09:13.806 20:58:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:13.806 20:58:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.806 20:58:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:13.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:13.806 20:58:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.806 20:58:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:13.806 [2024-11-19 20:58:47.541349] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
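
The export and measurement step traced above can be summarized as the sketch below. It reuses the addresses, NQN, and bdevperf flags shown in the trace and assumes $lvol holds the lvol UUID from the setup step; it is illustrative rather than a verbatim excerpt of nvmf_lvs_grow.sh.

# sketch only: export the lvol over NVMe/TCP, then attach bdevperf to it
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# bdevperf runs as a separate app with its own RPC socket and connects back in
build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

perform_tests drives the 10-second randwrite run whose per-second IOPS table follows in the trace; the lvstore grow is issued mid-run so the workload keeps going while the cluster count doubles.
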
00:09:13.807 [2024-11-19 20:58:47.541518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2890315 ] 00:09:14.065 [2024-11-19 20:58:47.681554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.065 [2024-11-19 20:58:47.812742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.000 20:58:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.000 20:58:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:15.000 20:58:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:15.565 Nvme0n1 00:09:15.565 20:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:15.824 [ 00:09:15.824 { 00:09:15.824 "name": "Nvme0n1", 00:09:15.824 "aliases": [ 00:09:15.824 "e9dda500-d27e-4266-ab71-a63679eca1ab" 00:09:15.824 ], 00:09:15.824 "product_name": "NVMe disk", 00:09:15.824 "block_size": 4096, 00:09:15.824 "num_blocks": 38912, 00:09:15.824 "uuid": "e9dda500-d27e-4266-ab71-a63679eca1ab", 00:09:15.824 "numa_id": 0, 00:09:15.824 "assigned_rate_limits": { 00:09:15.824 "rw_ios_per_sec": 0, 00:09:15.824 "rw_mbytes_per_sec": 0, 00:09:15.824 "r_mbytes_per_sec": 0, 00:09:15.824 "w_mbytes_per_sec": 0 00:09:15.824 }, 00:09:15.824 "claimed": false, 00:09:15.824 "zoned": false, 00:09:15.824 "supported_io_types": { 00:09:15.824 "read": true, 00:09:15.824 "write": true, 00:09:15.824 "unmap": true, 00:09:15.824 "flush": true, 00:09:15.824 "reset": true, 00:09:15.824 "nvme_admin": true, 00:09:15.824 "nvme_io": true, 00:09:15.824 "nvme_io_md": false, 00:09:15.824 "write_zeroes": true, 00:09:15.824 "zcopy": false, 00:09:15.824 "get_zone_info": false, 00:09:15.824 "zone_management": false, 00:09:15.824 "zone_append": false, 00:09:15.824 "compare": true, 00:09:15.824 "compare_and_write": true, 00:09:15.824 "abort": true, 00:09:15.824 "seek_hole": false, 00:09:15.824 "seek_data": false, 00:09:15.824 "copy": true, 00:09:15.824 "nvme_iov_md": false 00:09:15.824 }, 00:09:15.824 "memory_domains": [ 00:09:15.824 { 00:09:15.824 "dma_device_id": "system", 00:09:15.824 "dma_device_type": 1 00:09:15.824 } 00:09:15.824 ], 00:09:15.824 "driver_specific": { 00:09:15.824 "nvme": [ 00:09:15.824 { 00:09:15.824 "trid": { 00:09:15.824 "trtype": "TCP", 00:09:15.824 "adrfam": "IPv4", 00:09:15.824 "traddr": "10.0.0.2", 00:09:15.824 "trsvcid": "4420", 00:09:15.824 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:15.824 }, 00:09:15.824 "ctrlr_data": { 00:09:15.824 "cntlid": 1, 00:09:15.824 "vendor_id": "0x8086", 00:09:15.824 "model_number": "SPDK bdev Controller", 00:09:15.824 "serial_number": "SPDK0", 00:09:15.824 "firmware_revision": "25.01", 00:09:15.824 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:15.824 "oacs": { 00:09:15.824 "security": 0, 00:09:15.824 "format": 0, 00:09:15.824 "firmware": 0, 00:09:15.824 "ns_manage": 0 00:09:15.824 }, 00:09:15.824 "multi_ctrlr": true, 00:09:15.824 
"ana_reporting": false 00:09:15.824 }, 00:09:15.824 "vs": { 00:09:15.824 "nvme_version": "1.3" 00:09:15.824 }, 00:09:15.824 "ns_data": { 00:09:15.824 "id": 1, 00:09:15.824 "can_share": true 00:09:15.824 } 00:09:15.824 } 00:09:15.824 ], 00:09:15.824 "mp_policy": "active_passive" 00:09:15.824 } 00:09:15.824 } 00:09:15.824 ] 00:09:15.824 20:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2890569 00:09:15.824 20:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:15.824 20:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:15.824 Running I/O for 10 seconds... 00:09:16.760 Latency(us) 00:09:16.760 [2024-11-19T19:58:50.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.760 Nvme0n1 : 1.00 10415.00 40.68 0.00 0.00 0.00 0.00 0.00 00:09:16.760 [2024-11-19T19:58:50.555Z] =================================================================================================================== 00:09:16.760 [2024-11-19T19:58:50.555Z] Total : 10415.00 40.68 0.00 0.00 0.00 0.00 0.00 00:09:16.760 00:09:17.696 20:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b3785a0c-1f0f-490c-94ef-089a9d484350 00:09:17.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.955 Nvme0n1 : 2.00 10573.50 41.30 0.00 0.00 0.00 0.00 0.00 00:09:17.955 [2024-11-19T19:58:51.750Z] =================================================================================================================== 00:09:17.955 [2024-11-19T19:58:51.750Z] Total : 10573.50 41.30 0.00 0.00 0.00 0.00 0.00 00:09:17.955 00:09:17.955 true 00:09:17.955 20:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3785a0c-1f0f-490c-94ef-089a9d484350 00:09:17.955 20:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:18.213 20:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:18.213 20:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:18.213 20:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2890569 00:09:18.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.779 Nvme0n1 : 3.00 10668.33 41.67 0.00 0.00 0.00 0.00 0.00 00:09:18.779 [2024-11-19T19:58:52.574Z] =================================================================================================================== 00:09:18.779 [2024-11-19T19:58:52.574Z] Total : 10668.33 41.67 0.00 0.00 0.00 0.00 0.00 00:09:18.779 00:09:20.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.153 Nvme0n1 : 4.00 10731.75 41.92 0.00 0.00 0.00 0.00 0.00 00:09:20.153 [2024-11-19T19:58:53.948Z] 
=================================================================================================================== 00:09:20.153 [2024-11-19T19:58:53.948Z] Total : 10731.75 41.92 0.00 0.00 0.00 0.00 0.00 00:09:20.153 00:09:21.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.088 Nvme0n1 : 5.00 10769.80 42.07 0.00 0.00 0.00 0.00 0.00 00:09:21.088 [2024-11-19T19:58:54.883Z] =================================================================================================================== 00:09:21.088 [2024-11-19T19:58:54.883Z] Total : 10769.80 42.07 0.00 0.00 0.00 0.00 0.00 00:09:21.088 00:09:22.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.023 Nvme0n1 : 6.00 10816.33 42.25 0.00 0.00 0.00 0.00 0.00 00:09:22.023 [2024-11-19T19:58:55.818Z] =================================================================================================================== 00:09:22.023 [2024-11-19T19:58:55.818Z] Total : 10816.33 42.25 0.00 0.00 0.00 0.00 0.00 00:09:22.023 00:09:22.958 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.958 Nvme0n1 : 7.00 10849.57 42.38 0.00 0.00 0.00 0.00 0.00 00:09:22.958 [2024-11-19T19:58:56.753Z] =================================================================================================================== 00:09:22.958 [2024-11-19T19:58:56.753Z] Total : 10849.57 42.38 0.00 0.00 0.00 0.00 0.00 00:09:22.958 00:09:23.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.892 Nvme0n1 : 8.00 10878.75 42.50 0.00 0.00 0.00 0.00 0.00 00:09:23.892 [2024-11-19T19:58:57.687Z] =================================================================================================================== 00:09:23.892 [2024-11-19T19:58:57.687Z] Total : 10878.75 42.50 0.00 0.00 0.00 0.00 0.00 00:09:23.892 00:09:24.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.827 Nvme0n1 : 9.00 10925.89 42.68 0.00 0.00 0.00 0.00 0.00 00:09:24.827 [2024-11-19T19:58:58.622Z] =================================================================================================================== 00:09:24.827 [2024-11-19T19:58:58.622Z] Total : 10925.89 42.68 0.00 0.00 0.00 0.00 0.00 00:09:24.827 00:09:25.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.762 Nvme0n1 : 10.00 10963.60 42.83 0.00 0.00 0.00 0.00 0.00 00:09:25.762 [2024-11-19T19:58:59.557Z] =================================================================================================================== 00:09:25.762 [2024-11-19T19:58:59.557Z] Total : 10963.60 42.83 0.00 0.00 0.00 0.00 0.00 00:09:25.762 00:09:26.021 00:09:26.021 Latency(us) 00:09:26.021 [2024-11-19T19:58:59.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.021 Nvme0n1 : 10.01 10969.85 42.85 0.00 0.00 11661.98 2852.03 22719.15 00:09:26.021 [2024-11-19T19:58:59.816Z] =================================================================================================================== 00:09:26.021 [2024-11-19T19:58:59.816Z] Total : 10969.85 42.85 0.00 0.00 11661.98 2852.03 22719.15 00:09:26.021 { 00:09:26.021 "results": [ 00:09:26.021 { 00:09:26.021 "job": "Nvme0n1", 00:09:26.021 "core_mask": "0x2", 00:09:26.021 "workload": "randwrite", 00:09:26.021 "status": "finished", 00:09:26.021 "queue_depth": 128, 00:09:26.021 "io_size": 4096, 00:09:26.021 
"runtime": 10.005969, 00:09:26.021 "iops": 10969.852095284325, 00:09:26.021 "mibps": 42.85098474720439, 00:09:26.021 "io_failed": 0, 00:09:26.021 "io_timeout": 0, 00:09:26.021 "avg_latency_us": 11661.982153225708, 00:09:26.021 "min_latency_us": 2852.0296296296297, 00:09:26.021 "max_latency_us": 22719.146666666667 00:09:26.021 } 00:09:26.021 ], 00:09:26.021 "core_count": 1 00:09:26.021 } 00:09:26.021 20:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2890315 00:09:26.021 20:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2890315 ']' 00:09:26.021 20:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2890315 00:09:26.021 20:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:26.021 20:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.021 20:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2890315 00:09:26.021 20:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:26.021 20:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:26.021 20:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2890315' 00:09:26.021 killing process with pid 2890315 00:09:26.021 20:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2890315 00:09:26.021 Received shutdown signal, test time was about 10.000000 seconds 00:09:26.021 00:09:26.021 Latency(us) 00:09:26.021 [2024-11-19T19:58:59.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.021 [2024-11-19T19:58:59.816Z] =================================================================================================================== 00:09:26.021 [2024-11-19T19:58:59.816Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:26.021 20:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2890315 00:09:26.956 20:59:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:27.214 20:59:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:27.472 20:59:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3785a0c-1f0f-490c-94ef-089a9d484350 00:09:27.472 20:59:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:27.730 20:59:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:27.730 20:59:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:27.730 20:59:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2887492 00:09:27.730 20:59:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2887492 00:09:27.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2887492 Killed "${NVMF_APP[@]}" "$@" 00:09:27.989 20:59:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:27.989 20:59:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:27.989 20:59:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:27.989 20:59:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:27.989 20:59:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:27.989 20:59:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2892032 00:09:27.989 20:59:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:27.989 20:59:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2892032 00:09:27.989 20:59:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2892032 ']' 00:09:27.989 20:59:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.989 20:59:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.989 20:59:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.989 20:59:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.989 20:59:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:27.989 [2024-11-19 20:59:01.621168] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:09:27.989 [2024-11-19 20:59:01.621330] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.989 [2024-11-19 20:59:01.775873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.247 [2024-11-19 20:59:01.908022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.247 [2024-11-19 20:59:01.908123] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.247 [2024-11-19 20:59:01.908150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.247 [2024-11-19 20:59:01.908174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
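
What distinguishes the dirty variant is sketched below: the first target is killed with SIGKILL so the lvstore is never cleanly unloaded, a fresh nvmf_tgt is started, and re-creating the AIO bdev forces blobstore recovery (the bs_recover notices that follow). $nvmfpid, $lvol, and /tmp/aio_file are stand-ins, and the network-namespace wrapper (ip netns exec cvl_0_0_ns_spdk) used in the trace is omitted for brevity.

# sketch only: dirty shutdown, restart, and recovery of the lvstore
kill -9 "$nvmfpid"                              # leave the lvstore dirty on disk
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &      # fresh target instance
scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096   # triggers blobstore recovery
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py bdev_get_bdevs -b "$lvol" -t 2000             # lvol reappears once recovery completes
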
00:09:28.247 [2024-11-19 20:59:01.908194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:28.247 [2024-11-19 20:59:01.909802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.855 20:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.855 20:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:28.855 20:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:28.855 20:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:28.855 20:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:28.856 20:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.856 20:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:29.418 [2024-11-19 20:59:02.910504] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:29.418 [2024-11-19 20:59:02.910753] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:29.418 [2024-11-19 20:59:02.910837] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:29.418 20:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:29.418 20:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e9dda500-d27e-4266-ab71-a63679eca1ab 00:09:29.418 20:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e9dda500-d27e-4266-ab71-a63679eca1ab 00:09:29.418 20:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.418 20:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:29.418 20:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.418 20:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.418 20:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:29.676 20:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e9dda500-d27e-4266-ab71-a63679eca1ab -t 2000 00:09:29.933 [ 00:09:29.933 { 00:09:29.933 "name": "e9dda500-d27e-4266-ab71-a63679eca1ab", 00:09:29.933 "aliases": [ 00:09:29.933 "lvs/lvol" 00:09:29.933 ], 00:09:29.933 "product_name": "Logical Volume", 00:09:29.933 "block_size": 4096, 00:09:29.933 "num_blocks": 38912, 00:09:29.933 "uuid": "e9dda500-d27e-4266-ab71-a63679eca1ab", 00:09:29.933 "assigned_rate_limits": { 00:09:29.933 "rw_ios_per_sec": 0, 00:09:29.933 "rw_mbytes_per_sec": 0, 
00:09:29.933 "r_mbytes_per_sec": 0, 00:09:29.933 "w_mbytes_per_sec": 0 00:09:29.933 }, 00:09:29.933 "claimed": false, 00:09:29.933 "zoned": false, 00:09:29.933 "supported_io_types": { 00:09:29.933 "read": true, 00:09:29.933 "write": true, 00:09:29.933 "unmap": true, 00:09:29.933 "flush": false, 00:09:29.933 "reset": true, 00:09:29.934 "nvme_admin": false, 00:09:29.934 "nvme_io": false, 00:09:29.934 "nvme_io_md": false, 00:09:29.934 "write_zeroes": true, 00:09:29.934 "zcopy": false, 00:09:29.934 "get_zone_info": false, 00:09:29.934 "zone_management": false, 00:09:29.934 "zone_append": false, 00:09:29.934 "compare": false, 00:09:29.934 "compare_and_write": false, 00:09:29.934 "abort": false, 00:09:29.934 "seek_hole": true, 00:09:29.934 "seek_data": true, 00:09:29.934 "copy": false, 00:09:29.934 "nvme_iov_md": false 00:09:29.934 }, 00:09:29.934 "driver_specific": { 00:09:29.934 "lvol": { 00:09:29.934 "lvol_store_uuid": "b3785a0c-1f0f-490c-94ef-089a9d484350", 00:09:29.934 "base_bdev": "aio_bdev", 00:09:29.934 "thin_provision": false, 00:09:29.934 "num_allocated_clusters": 38, 00:09:29.934 "snapshot": false, 00:09:29.934 "clone": false, 00:09:29.934 "esnap_clone": false 00:09:29.934 } 00:09:29.934 } 00:09:29.934 } 00:09:29.934 ] 00:09:29.934 20:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:29.934 20:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3785a0c-1f0f-490c-94ef-089a9d484350 00:09:29.934 20:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:30.191 20:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:30.192 20:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3785a0c-1f0f-490c-94ef-089a9d484350 00:09:30.192 20:59:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:30.448 20:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:30.448 20:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:30.706 [2024-11-19 20:59:04.435667] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:30.706 20:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3785a0c-1f0f-490c-94ef-089a9d484350 00:09:30.706 20:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:30.706 20:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3785a0c-1f0f-490c-94ef-089a9d484350 00:09:30.706 20:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.706 20:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.706 20:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.706 20:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.706 20:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.706 20:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.706 20:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.706 20:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:30.706 20:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3785a0c-1f0f-490c-94ef-089a9d484350 00:09:31.270 request: 00:09:31.270 { 00:09:31.270 "uuid": "b3785a0c-1f0f-490c-94ef-089a9d484350", 00:09:31.270 "method": "bdev_lvol_get_lvstores", 00:09:31.270 "req_id": 1 00:09:31.270 } 00:09:31.270 Got JSON-RPC error response 00:09:31.270 response: 00:09:31.270 { 00:09:31.270 "code": -19, 00:09:31.270 "message": "No such device" 00:09:31.270 } 00:09:31.270 20:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:31.270 20:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:31.270 20:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:31.270 20:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:31.270 20:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:31.528 aio_bdev 00:09:31.528 20:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e9dda500-d27e-4266-ab71-a63679eca1ab 00:09:31.528 20:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e9dda500-d27e-4266-ab71-a63679eca1ab 00:09:31.528 20:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:31.528 20:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:31.528 20:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:31.528 20:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:31.528 20:59:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:31.786 20:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e9dda500-d27e-4266-ab71-a63679eca1ab -t 2000 00:09:32.046 [ 00:09:32.046 { 00:09:32.046 "name": "e9dda500-d27e-4266-ab71-a63679eca1ab", 00:09:32.046 "aliases": [ 00:09:32.046 "lvs/lvol" 00:09:32.046 ], 00:09:32.046 "product_name": "Logical Volume", 00:09:32.046 "block_size": 4096, 00:09:32.046 "num_blocks": 38912, 00:09:32.046 "uuid": "e9dda500-d27e-4266-ab71-a63679eca1ab", 00:09:32.046 "assigned_rate_limits": { 00:09:32.046 "rw_ios_per_sec": 0, 00:09:32.046 "rw_mbytes_per_sec": 0, 00:09:32.046 "r_mbytes_per_sec": 0, 00:09:32.046 "w_mbytes_per_sec": 0 00:09:32.046 }, 00:09:32.046 "claimed": false, 00:09:32.046 "zoned": false, 00:09:32.046 "supported_io_types": { 00:09:32.046 "read": true, 00:09:32.046 "write": true, 00:09:32.046 "unmap": true, 00:09:32.046 "flush": false, 00:09:32.046 "reset": true, 00:09:32.046 "nvme_admin": false, 00:09:32.046 "nvme_io": false, 00:09:32.046 "nvme_io_md": false, 00:09:32.046 "write_zeroes": true, 00:09:32.046 "zcopy": false, 00:09:32.046 "get_zone_info": false, 00:09:32.046 "zone_management": false, 00:09:32.046 "zone_append": false, 00:09:32.046 "compare": false, 00:09:32.046 "compare_and_write": false, 00:09:32.046 "abort": false, 00:09:32.046 "seek_hole": true, 00:09:32.046 "seek_data": true, 00:09:32.046 "copy": false, 00:09:32.046 "nvme_iov_md": false 00:09:32.046 }, 00:09:32.046 "driver_specific": { 00:09:32.046 "lvol": { 00:09:32.046 "lvol_store_uuid": "b3785a0c-1f0f-490c-94ef-089a9d484350", 00:09:32.046 "base_bdev": "aio_bdev", 00:09:32.046 "thin_provision": false, 00:09:32.046 "num_allocated_clusters": 38, 00:09:32.046 "snapshot": false, 00:09:32.046 "clone": false, 00:09:32.046 "esnap_clone": false 00:09:32.046 } 00:09:32.046 } 00:09:32.046 } 00:09:32.046 ] 00:09:32.046 20:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:32.046 20:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3785a0c-1f0f-490c-94ef-089a9d484350 00:09:32.046 20:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:32.339 20:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:32.339 20:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3785a0c-1f0f-490c-94ef-089a9d484350 00:09:32.339 20:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:32.625 20:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:32.625 20:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e9dda500-d27e-4266-ab71-a63679eca1ab 00:09:32.883 20:59:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b3785a0c-1f0f-490c-94ef-089a9d484350 00:09:33.141 20:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:33.706 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:33.707 00:09:33.707 real 0m22.579s 00:09:33.707 user 0m56.302s 00:09:33.707 sys 0m4.742s 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:33.707 ************************************ 00:09:33.707 END TEST lvs_grow_dirty 00:09:33.707 ************************************ 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:33.707 nvmf_trace.0 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:33.707 rmmod nvme_tcp 00:09:33.707 rmmod nvme_fabrics 00:09:33.707 rmmod nvme_keyring 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:33.707 
20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2892032 ']' 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2892032 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2892032 ']' 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2892032 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2892032 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2892032' 00:09:33.707 killing process with pid 2892032 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2892032 00:09:33.707 20:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2892032 00:09:35.082 20:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:35.082 20:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:35.082 20:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:35.082 20:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:35.082 20:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:35.082 20:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:35.082 20:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:35.082 20:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:35.082 20:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:35.082 20:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.082 20:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.082 20:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:36.986 00:09:36.986 real 0m49.121s 00:09:36.986 user 1m23.675s 00:09:36.986 sys 0m8.837s 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:36.986 ************************************ 00:09:36.986 END TEST nvmf_lvs_grow 00:09:36.986 ************************************ 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.986 ************************************ 00:09:36.986 START TEST nvmf_bdev_io_wait 00:09:36.986 ************************************ 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:36.986 * Looking for test storage... 00:09:36.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.986 --rc genhtml_branch_coverage=1 00:09:36.986 --rc genhtml_function_coverage=1 00:09:36.986 --rc genhtml_legend=1 00:09:36.986 --rc geninfo_all_blocks=1 00:09:36.986 --rc geninfo_unexecuted_blocks=1 00:09:36.986 00:09:36.986 ' 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.986 --rc genhtml_branch_coverage=1 00:09:36.986 --rc genhtml_function_coverage=1 00:09:36.986 --rc genhtml_legend=1 00:09:36.986 --rc geninfo_all_blocks=1 00:09:36.986 --rc geninfo_unexecuted_blocks=1 00:09:36.986 00:09:36.986 ' 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.986 --rc genhtml_branch_coverage=1 00:09:36.986 --rc genhtml_function_coverage=1 00:09:36.986 --rc genhtml_legend=1 00:09:36.986 --rc geninfo_all_blocks=1 00:09:36.986 --rc geninfo_unexecuted_blocks=1 00:09:36.986 00:09:36.986 ' 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.986 --rc genhtml_branch_coverage=1 00:09:36.986 --rc genhtml_function_coverage=1 00:09:36.986 --rc genhtml_legend=1 00:09:36.986 --rc geninfo_all_blocks=1 00:09:36.986 --rc geninfo_unexecuted_blocks=1 00:09:36.986 00:09:36.986 ' 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.986 20:59:10 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.986 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:36.987 20:59:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:39.523 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:39.523 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.523 20:59:12 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:39.523 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:39.523 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:39.523 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:39.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:39.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:09:39.524 00:09:39.524 --- 10.0.0.2 ping statistics --- 00:09:39.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.524 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:39.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:39.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:09:39.524 00:09:39.524 --- 10.0.0.1 ping statistics --- 00:09:39.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.524 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2894840 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2894840 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2894840 ']' 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.524 20:59:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.524 [2024-11-19 20:59:13.091534] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:09:39.524 [2024-11-19 20:59:13.091676] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.524 [2024-11-19 20:59:13.243368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:39.782 [2024-11-19 20:59:13.390280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.782 [2024-11-19 20:59:13.390365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.782 [2024-11-19 20:59:13.390387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.782 [2024-11-19 20:59:13.390407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.782 [2024-11-19 20:59:13.390443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:39.782 [2024-11-19 20:59:13.393289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.782 [2024-11-19 20:59:13.393345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.782 [2024-11-19 20:59:13.393453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.782 [2024-11-19 20:59:13.393459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.349 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.349 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:40.349 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:40.349 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:40.349 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.349 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.349 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:40.349 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.349 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.349 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.349 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:40.349 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.349 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.607 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.607 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:40.607 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.607 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:40.607 [2024-11-19 20:59:14.350436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.607 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.607 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:40.607 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.607 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.865 Malloc0 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.866 [2024-11-19 20:59:14.457678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2894999 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2895000 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2895003 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:40.866 20:59:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:40.866 { 00:09:40.866 "params": { 00:09:40.866 "name": "Nvme$subsystem", 00:09:40.866 "trtype": "$TEST_TRANSPORT", 00:09:40.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:40.866 "adrfam": "ipv4", 00:09:40.866 "trsvcid": "$NVMF_PORT", 00:09:40.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:40.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:40.866 "hdgst": ${hdgst:-false}, 00:09:40.866 "ddgst": ${ddgst:-false} 00:09:40.866 }, 00:09:40.866 "method": "bdev_nvme_attach_controller" 00:09:40.866 } 00:09:40.866 EOF 00:09:40.866 )") 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:40.866 { 00:09:40.866 "params": { 00:09:40.866 "name": "Nvme$subsystem", 00:09:40.866 "trtype": "$TEST_TRANSPORT", 00:09:40.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:40.866 "adrfam": "ipv4", 00:09:40.866 "trsvcid": "$NVMF_PORT", 00:09:40.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:40.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:40.866 "hdgst": ${hdgst:-false}, 00:09:40.866 "ddgst": ${ddgst:-false} 00:09:40.866 }, 00:09:40.866 "method": "bdev_nvme_attach_controller" 00:09:40.866 } 00:09:40.866 EOF 00:09:40.866 )") 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2895005 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:40.866 { 00:09:40.866 
"params": { 00:09:40.866 "name": "Nvme$subsystem", 00:09:40.866 "trtype": "$TEST_TRANSPORT", 00:09:40.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:40.866 "adrfam": "ipv4", 00:09:40.866 "trsvcid": "$NVMF_PORT", 00:09:40.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:40.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:40.866 "hdgst": ${hdgst:-false}, 00:09:40.866 "ddgst": ${ddgst:-false} 00:09:40.866 }, 00:09:40.866 "method": "bdev_nvme_attach_controller" 00:09:40.866 } 00:09:40.866 EOF 00:09:40.866 )") 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:40.866 { 00:09:40.866 "params": { 00:09:40.866 "name": "Nvme$subsystem", 00:09:40.866 "trtype": "$TEST_TRANSPORT", 00:09:40.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:40.866 "adrfam": "ipv4", 00:09:40.866 "trsvcid": "$NVMF_PORT", 00:09:40.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:40.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:40.866 "hdgst": ${hdgst:-false}, 00:09:40.866 "ddgst": ${ddgst:-false} 00:09:40.866 }, 00:09:40.866 "method": "bdev_nvme_attach_controller" 00:09:40.866 } 00:09:40.866 EOF 00:09:40.866 )") 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2894999 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:40.866 "params": { 00:09:40.866 "name": "Nvme1", 00:09:40.866 "trtype": "tcp", 00:09:40.866 "traddr": "10.0.0.2", 00:09:40.866 "adrfam": "ipv4", 00:09:40.866 "trsvcid": "4420", 00:09:40.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:40.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:40.866 "hdgst": false, 00:09:40.866 "ddgst": false 00:09:40.866 }, 00:09:40.866 "method": "bdev_nvme_attach_controller" 00:09:40.866 }' 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:40.866 "params": { 00:09:40.866 "name": "Nvme1", 00:09:40.866 "trtype": "tcp", 00:09:40.866 "traddr": "10.0.0.2", 00:09:40.866 "adrfam": "ipv4", 00:09:40.866 "trsvcid": "4420", 00:09:40.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:40.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:40.866 "hdgst": false, 00:09:40.866 "ddgst": false 00:09:40.866 }, 00:09:40.866 "method": "bdev_nvme_attach_controller" 00:09:40.866 }' 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:40.866 "params": { 00:09:40.866 "name": "Nvme1", 00:09:40.866 "trtype": "tcp", 00:09:40.866 "traddr": "10.0.0.2", 00:09:40.866 "adrfam": "ipv4", 00:09:40.866 "trsvcid": "4420", 00:09:40.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:40.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:40.866 "hdgst": false, 00:09:40.866 "ddgst": false 00:09:40.866 }, 00:09:40.866 "method": "bdev_nvme_attach_controller" 00:09:40.866 }' 00:09:40.866 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:40.867 20:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:40.867 "params": { 00:09:40.867 "name": "Nvme1", 00:09:40.867 "trtype": "tcp", 00:09:40.867 "traddr": "10.0.0.2", 00:09:40.867 "adrfam": "ipv4", 00:09:40.867 "trsvcid": "4420", 00:09:40.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:40.867 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:40.867 "hdgst": false, 00:09:40.867 "ddgst": false 00:09:40.867 }, 00:09:40.867 "method": "bdev_nvme_attach_controller" 00:09:40.867 }' 00:09:40.867 [2024-11-19 20:59:14.549181] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:09:40.867 [2024-11-19 20:59:14.549176] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:09:40.867 [2024-11-19 20:59:14.549176] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:09:40.867 [2024-11-19 20:59:14.549320] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-19 20:59:14.549321] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-19 20:59:14.549323] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:40.867 --proc-type=auto ] 00:09:40.867 --proc-type=auto ] 00:09:40.867 [2024-11-19 20:59:14.550237] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:09:40.867 [2024-11-19 20:59:14.550375] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:41.125 [2024-11-19 20:59:14.811635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.125 [2024-11-19 20:59:14.911578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.382 [2024-11-19 20:59:14.934402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:41.382 [2024-11-19 20:59:15.012513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.382 [2024-11-19 20:59:15.035915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:41.382 [2024-11-19 20:59:15.089797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.382 [2024-11-19 20:59:15.136659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:41.640 [2024-11-19 20:59:15.206670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:41.899 Running I/O for 1 seconds... 00:09:41.899 Running I/O for 1 seconds... 00:09:41.899 Running I/O for 1 seconds... 00:09:41.899 Running I/O for 1 seconds... 
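
Four bdevperf jobs are now running in parallel against the same namespace, one per reactor core, and their per-job result tables follow (workloads read, write, flush and unmap at queue depth 128 with 4096-byte I/O for one second, per the Job: lines below). A hedged sketch of launching a single such job by hand, reusing the bdevperf flags the queue_depth test uses later in this run; the binary path matches this workspace, while the core mask and the JSON config path are illustrative.

# One stand-alone bdevperf job: 4 KiB reads, queue depth 128, 1 second,
# pinned to a single core; --json points at a controller config like the
# one generated above (the path is a placeholder).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x20 -q 128 -o 4096 -w read -t 1 \
    --json /tmp/nvmf_controller.json

In the tables that follow, IOPS and MiB/s are per-job throughput and the Average/min/max columns are completion latencies in microseconds, which is why the flush job reports far higher IOPS and sub-millisecond latency than the data-moving workloads.
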
00:09:42.834 8202.00 IOPS, 32.04 MiB/s 00:09:42.834 Latency(us) 00:09:42.834 [2024-11-19T19:59:16.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.834 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:42.834 Nvme1n1 : 1.01 8243.37 32.20 0.00 0.00 15439.81 8446.86 22039.51 00:09:42.834 [2024-11-19T19:59:16.629Z] =================================================================================================================== 00:09:42.834 [2024-11-19T19:59:16.629Z] Total : 8243.37 32.20 0.00 0.00 15439.81 8446.86 22039.51 00:09:42.834 158032.00 IOPS, 617.31 MiB/s 00:09:42.834 Latency(us) 00:09:42.834 [2024-11-19T19:59:16.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.834 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:42.834 Nvme1n1 : 1.00 157712.27 616.06 0.00 0.00 807.43 361.05 2002.49 00:09:42.834 [2024-11-19T19:59:16.629Z] =================================================================================================================== 00:09:42.834 [2024-11-19T19:59:16.629Z] Total : 157712.27 616.06 0.00 0.00 807.43 361.05 2002.49 00:09:42.834 6290.00 IOPS, 24.57 MiB/s 00:09:42.834 Latency(us) 00:09:42.834 [2024-11-19T19:59:16.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.834 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:42.834 Nvme1n1 : 1.01 6346.66 24.79 0.00 0.00 20040.95 5631.24 32234.00 00:09:42.834 [2024-11-19T19:59:16.629Z] =================================================================================================================== 00:09:42.834 [2024-11-19T19:59:16.629Z] Total : 6346.66 24.79 0.00 0.00 20040.95 5631.24 32234.00 00:09:43.093 7287.00 IOPS, 28.46 MiB/s 00:09:43.093 Latency(us) 00:09:43.093 [2024-11-19T19:59:16.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.093 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:43.093 Nvme1n1 : 1.01 7349.99 28.71 0.00 0.00 17322.26 4296.25 26020.22 00:09:43.093 [2024-11-19T19:59:16.888Z] =================================================================================================================== 00:09:43.093 [2024-11-19T19:59:16.888Z] Total : 7349.99 28.71 0.00 0.00 17322.26 4296.25 26020.22 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2895000 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2895003 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2895005 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:43.659 rmmod nvme_tcp 00:09:43.659 rmmod nvme_fabrics 00:09:43.659 rmmod nvme_keyring 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2894840 ']' 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2894840 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2894840 ']' 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2894840 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2894840 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2894840' 00:09:43.659 killing process with pid 2894840 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2894840 00:09:43.659 20:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2894840 00:09:45.037 20:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:45.038 20:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:45.038 20:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:45.038 20:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:45.038 20:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:45.038 20:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:45.038 20:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:45.038 20:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:45.038 20:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:45.038 20:59:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.038 20:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.038 20:59:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:46.943 00:09:46.943 real 0m9.863s 00:09:46.943 user 0m27.716s 00:09:46.943 sys 0m4.305s 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:46.943 ************************************ 00:09:46.943 END TEST nvmf_bdev_io_wait 00:09:46.943 ************************************ 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.943 ************************************ 00:09:46.943 START TEST nvmf_queue_depth 00:09:46.943 ************************************ 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:46.943 * Looking for test storage... 
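
queue_depth.sh, started by the run_test above, provisions the TCP target entirely over JSON-RPC before driving it at queue depth 1024. The rpc_cmd calls that appear further down in this log reduce to the sequence sketched here; the sketch assumes scripts/rpc.py as the direct equivalent of the harness's rpc_cmd wrapper, talking over the default /var/tmp/spdk.sock of the nvmf_tgt instance that gets started with -m 0x2.

# Target-side provisioning performed by queue_depth.sh (sketch built from the
# rpc_cmd calls visible later in this log; a 64 MiB malloc bdev with 512-byte
# blocks is exported as namespace 1 of cnode1 and listened on 10.0.0.2:4420).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
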
00:09:46.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:46.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.943 --rc genhtml_branch_coverage=1 00:09:46.943 --rc genhtml_function_coverage=1 00:09:46.943 --rc genhtml_legend=1 00:09:46.943 --rc geninfo_all_blocks=1 00:09:46.943 --rc geninfo_unexecuted_blocks=1 00:09:46.943 00:09:46.943 ' 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:46.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.943 --rc genhtml_branch_coverage=1 00:09:46.943 --rc genhtml_function_coverage=1 00:09:46.943 --rc genhtml_legend=1 00:09:46.943 --rc geninfo_all_blocks=1 00:09:46.943 --rc geninfo_unexecuted_blocks=1 00:09:46.943 00:09:46.943 ' 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:46.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.943 --rc genhtml_branch_coverage=1 00:09:46.943 --rc genhtml_function_coverage=1 00:09:46.943 --rc genhtml_legend=1 00:09:46.943 --rc geninfo_all_blocks=1 00:09:46.943 --rc geninfo_unexecuted_blocks=1 00:09:46.943 00:09:46.943 ' 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:46.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.943 --rc genhtml_branch_coverage=1 00:09:46.943 --rc genhtml_function_coverage=1 00:09:46.943 --rc genhtml_legend=1 00:09:46.943 --rc geninfo_all_blocks=1 00:09:46.943 --rc geninfo_unexecuted_blocks=1 00:09:46.943 00:09:46.943 ' 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.943 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:46.944 20:59:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:48.848 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:48.848 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:48.848 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:48.848 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:48.848 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:49.107 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.107 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.107 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:49.107 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:49.107 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:49.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:09:49.108 00:09:49.108 --- 10.0.0.2 ping statistics --- 00:09:49.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.108 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:49.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:09:49.108 00:09:49.108 --- 10.0.0.1 ping statistics --- 00:09:49.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.108 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2897495 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2897495 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2897495 ']' 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.108 20:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.366 [2024-11-19 20:59:22.906647] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:09:49.366 [2024-11-19 20:59:22.906786] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.366 [2024-11-19 20:59:23.062621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.625 [2024-11-19 20:59:23.199990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.625 [2024-11-19 20:59:23.200081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.625 [2024-11-19 20:59:23.200108] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.625 [2024-11-19 20:59:23.200133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.625 [2024-11-19 20:59:23.200152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.625 [2024-11-19 20:59:23.201768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.191 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.191 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:50.191 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:50.191 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:50.191 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.191 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.191 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:50.191 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.191 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.191 [2024-11-19 20:59:23.895562] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:50.191 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.191 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:50.191 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.191 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.449 Malloc0 00:09:50.449 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.449 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:50.449 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.449 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.449 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.449 20:59:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:50.449 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.449 20:59:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.449 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.449 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:50.449 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.449 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.449 [2024-11-19 20:59:24.011816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.449 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.449 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2897650 00:09:50.449 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:50.449 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:50.449 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2897650 /var/tmp/bdevperf.sock 00:09:50.449 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2897650 ']' 00:09:50.449 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:50.449 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.449 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:50.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:50.449 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.449 20:59:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.449 [2024-11-19 20:59:24.106834] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:09:50.449 [2024-11-19 20:59:24.106983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897650 ] 00:09:50.708 [2024-11-19 20:59:24.264021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.708 [2024-11-19 20:59:24.403550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.275 20:59:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.276 20:59:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:51.276 20:59:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:51.276 20:59:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.276 20:59:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.533 NVMe0n1 00:09:51.534 20:59:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.534 20:59:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:51.791 Running I/O for 10 seconds... 00:09:53.661 5703.00 IOPS, 22.28 MiB/s [2024-11-19T19:59:28.390Z] 5847.50 IOPS, 22.84 MiB/s [2024-11-19T19:59:29.766Z] 5899.67 IOPS, 23.05 MiB/s [2024-11-19T19:59:30.701Z] 5905.25 IOPS, 23.07 MiB/s [2024-11-19T19:59:31.635Z] 5942.40 IOPS, 23.21 MiB/s [2024-11-19T19:59:32.570Z] 5969.17 IOPS, 23.32 MiB/s [2024-11-19T19:59:33.506Z] 5992.71 IOPS, 23.41 MiB/s [2024-11-19T19:59:34.444Z] 6010.75 IOPS, 23.48 MiB/s [2024-11-19T19:59:35.433Z] 6023.44 IOPS, 23.53 MiB/s [2024-11-19T19:59:35.691Z] 6034.40 IOPS, 23.57 MiB/s 00:10:01.896 Latency(us) 00:10:01.896 [2024-11-19T19:59:35.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.896 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:01.896 Verification LBA range: start 0x0 length 0x4000 00:10:01.897 NVMe0n1 : 10.15 6043.01 23.61 0.00 0.00 168539.40 27573.67 100197.26 00:10:01.897 [2024-11-19T19:59:35.692Z] =================================================================================================================== 00:10:01.897 [2024-11-19T19:59:35.692Z] Total : 6043.01 23.61 0.00 0.00 168539.40 27573.67 100197.26 00:10:01.897 { 00:10:01.897 "results": [ 00:10:01.897 { 00:10:01.897 "job": "NVMe0n1", 00:10:01.897 "core_mask": "0x1", 00:10:01.897 "workload": "verify", 00:10:01.897 "status": "finished", 00:10:01.897 "verify_range": { 00:10:01.897 "start": 0, 00:10:01.897 "length": 16384 00:10:01.897 }, 00:10:01.897 "queue_depth": 1024, 00:10:01.897 "io_size": 4096, 00:10:01.897 "runtime": 10.154867, 00:10:01.897 "iops": 6043.013660346314, 00:10:01.897 "mibps": 23.60552211072779, 00:10:01.897 "io_failed": 0, 00:10:01.897 "io_timeout": 0, 00:10:01.897 "avg_latency_us": 168539.39954087255, 00:10:01.897 "min_latency_us": 27573.665185185186, 00:10:01.897 "max_latency_us": 100197.26222222223 00:10:01.897 } 00:10:01.897 ], 00:10:01.897 "core_count": 1 00:10:01.897 } 00:10:01.897 20:59:35 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2897650 00:10:01.897 20:59:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2897650 ']' 00:10:01.897 20:59:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2897650 00:10:01.897 20:59:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:01.897 20:59:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.897 20:59:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2897650 00:10:01.897 20:59:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:01.897 20:59:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:01.897 20:59:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2897650' 00:10:01.897 killing process with pid 2897650 00:10:01.897 20:59:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2897650 00:10:01.897 Received shutdown signal, test time was about 10.000000 seconds 00:10:01.897 00:10:01.897 Latency(us) 00:10:01.897 [2024-11-19T19:59:35.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.897 [2024-11-19T19:59:35.692Z] =================================================================================================================== 00:10:01.897 [2024-11-19T19:59:35.692Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:01.897 20:59:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2897650 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:02.832 rmmod nvme_tcp 00:10:02.832 rmmod nvme_fabrics 00:10:02.832 rmmod nvme_keyring 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2897495 ']' 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2897495 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2897495 ']' 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 2897495 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2897495 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2897495' 00:10:02.832 killing process with pid 2897495 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2897495 00:10:02.832 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2897495 00:10:04.207 20:59:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:04.207 20:59:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:04.207 20:59:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:04.207 20:59:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:04.207 20:59:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:04.207 20:59:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:04.207 20:59:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:04.207 20:59:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:04.207 20:59:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:04.207 20:59:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.207 20:59:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.207 20:59:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:06.757 00:10:06.757 real 0m19.489s 00:10:06.757 user 0m27.842s 00:10:06.757 sys 0m3.263s 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:06.757 ************************************ 00:10:06.757 END TEST nvmf_queue_depth 00:10:06.757 ************************************ 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:10:06.757 ************************************ 00:10:06.757 START TEST nvmf_target_multipath 00:10:06.757 ************************************ 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:06.757 * Looking for test storage... 00:10:06.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:06.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.757 --rc genhtml_branch_coverage=1 00:10:06.757 --rc genhtml_function_coverage=1 00:10:06.757 --rc genhtml_legend=1 00:10:06.757 --rc geninfo_all_blocks=1 00:10:06.757 --rc geninfo_unexecuted_blocks=1 00:10:06.757 00:10:06.757 ' 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:06.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.757 --rc genhtml_branch_coverage=1 00:10:06.757 --rc genhtml_function_coverage=1 00:10:06.757 --rc genhtml_legend=1 00:10:06.757 --rc geninfo_all_blocks=1 00:10:06.757 --rc geninfo_unexecuted_blocks=1 00:10:06.757 00:10:06.757 ' 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:06.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.757 --rc genhtml_branch_coverage=1 00:10:06.757 --rc genhtml_function_coverage=1 00:10:06.757 --rc genhtml_legend=1 00:10:06.757 --rc geninfo_all_blocks=1 00:10:06.757 --rc geninfo_unexecuted_blocks=1 00:10:06.757 00:10:06.757 ' 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:06.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.757 --rc genhtml_branch_coverage=1 00:10:06.757 --rc genhtml_function_coverage=1 00:10:06.757 --rc genhtml_legend=1 00:10:06.757 --rc geninfo_all_blocks=1 00:10:06.757 --rc geninfo_unexecuted_blocks=1 00:10:06.757 00:10:06.757 ' 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.757 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:06.758 20:59:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.660 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:08.661 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:08.661 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:08.661 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.661 20:59:42 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:08.661 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:08.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:10:08.661 00:10:08.661 --- 10.0.0.2 ping statistics --- 00:10:08.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.661 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:10:08.661 00:10:08.661 --- 10.0.0.1 ping statistics --- 00:10:08.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.661 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:08.661 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:08.662 only one NIC for nvmf test 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.662 rmmod nvme_tcp 00:10:08.662 rmmod nvme_fabrics 00:10:08.662 rmmod nvme_keyring 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:08.662 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:08.921 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.921 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:08.921 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.921 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.921 20:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.827 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:10.827 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:10.827 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:10.827 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:10.827 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:10.827 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:10.827 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:10.827 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:10.828 00:10:10.828 real 0m4.465s 00:10:10.828 user 0m0.926s 00:10:10.828 sys 0m1.558s 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:10.828 ************************************ 00:10:10.828 END TEST nvmf_target_multipath 00:10:10.828 ************************************ 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.828 ************************************ 00:10:10.828 START TEST nvmf_zcopy 00:10:10.828 ************************************ 00:10:10.828 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:11.087 * Looking for test storage... 
00:10:11.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:11.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.087 --rc genhtml_branch_coverage=1 00:10:11.087 --rc genhtml_function_coverage=1 00:10:11.087 --rc genhtml_legend=1 00:10:11.087 --rc geninfo_all_blocks=1 00:10:11.087 --rc geninfo_unexecuted_blocks=1 00:10:11.087 00:10:11.087 ' 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:11.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.087 --rc genhtml_branch_coverage=1 00:10:11.087 --rc genhtml_function_coverage=1 00:10:11.087 --rc genhtml_legend=1 00:10:11.087 --rc geninfo_all_blocks=1 00:10:11.087 --rc geninfo_unexecuted_blocks=1 00:10:11.087 00:10:11.087 ' 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:11.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.087 --rc genhtml_branch_coverage=1 00:10:11.087 --rc genhtml_function_coverage=1 00:10:11.087 --rc genhtml_legend=1 00:10:11.087 --rc geninfo_all_blocks=1 00:10:11.087 --rc geninfo_unexecuted_blocks=1 00:10:11.087 00:10:11.087 ' 00:10:11.087 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:11.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.087 --rc genhtml_branch_coverage=1 00:10:11.087 --rc genhtml_function_coverage=1 00:10:11.088 --rc genhtml_legend=1 00:10:11.088 --rc geninfo_all_blocks=1 00:10:11.088 --rc geninfo_unexecuted_blocks=1 00:10:11.088 00:10:11.088 ' 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.088 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:12.992 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:12.992 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:12.992 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:12.993 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:12.993 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.993 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:13.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:10:13.252 00:10:13.252 --- 10.0.0.2 ping statistics --- 00:10:13.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.252 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:13.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:10:13.252 00:10:13.252 --- 10.0.0.1 ping statistics --- 00:10:13.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.252 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2903119 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2903119 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2903119 ']' 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.252 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.252 [2024-11-19 20:59:47.024269] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
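The trace above brings up the TCP test bed by flushing both ice ports, moving the target-side port (cvl_0_0) into a private network namespace, addressing both ends, opening TCP port 4420, and ping-checking reachability in both directions. A minimal standalone sketch of the same bring-up, reconstructed from the commands traced above (interface names, addresses and the port are taken from this log; run as root):

# reconstructed from the nvmf_tcp_init trace above; assumes ice ports cvl_0_0/cvl_0_1 already exist
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target side lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                                 # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> root namespace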
00:10:13.252 [2024-11-19 20:59:47.024432] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.511 [2024-11-19 20:59:47.175329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.770 [2024-11-19 20:59:47.313469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.770 [2024-11-19 20:59:47.313539] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.770 [2024-11-19 20:59:47.313564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.770 [2024-11-19 20:59:47.313589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.770 [2024-11-19 20:59:47.313608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:13.770 [2024-11-19 20:59:47.315231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.337 20:59:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.337 20:59:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:14.337 20:59:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:14.337 20:59:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:14.337 20:59:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:14.337 [2024-11-19 20:59:48.019441] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:14.337 [2024-11-19 20:59:48.035702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:14.337 malloc0 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:14.337 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:14.337 { 00:10:14.337 "params": { 00:10:14.337 "name": "Nvme$subsystem", 00:10:14.337 "trtype": "$TEST_TRANSPORT", 00:10:14.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:14.337 "adrfam": "ipv4", 00:10:14.337 "trsvcid": "$NVMF_PORT", 00:10:14.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:14.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:14.337 "hdgst": ${hdgst:-false}, 00:10:14.338 "ddgst": ${ddgst:-false} 00:10:14.338 }, 00:10:14.338 "method": "bdev_nvme_attach_controller" 00:10:14.338 } 00:10:14.338 EOF 00:10:14.338 )") 00:10:14.338 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:14.338 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
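The target bring-up traced above (nvmfappstart plus the zcopy.sh RPCs) reduces to a short sequence: start nvmf_tgt inside the target namespace, create a zero-copy-enabled TCP transport, create subsystem cnode1 with a listener on 10.0.0.2:4420, and expose a 32 MiB malloc bdev as namespace 1. A hedged standalone sketch, assuming scripts/rpc.py stands in for the harness's rpc_cmd wrapper and that it runs from the SPDK repository root:

# sketch of the traced RPC sequence; flags and names are copied from the log above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
sleep 2                                                            # crude wait for the RPC socket (the harness uses waitforlisten)
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy      # zero-copy enabled TCP transport
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0             # 32 MiB malloc bdev, 4 KiB blocks
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1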
00:10:14.338 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:14.338 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:14.338 "params": { 00:10:14.338 "name": "Nvme1", 00:10:14.338 "trtype": "tcp", 00:10:14.338 "traddr": "10.0.0.2", 00:10:14.338 "adrfam": "ipv4", 00:10:14.338 "trsvcid": "4420", 00:10:14.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:14.338 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:14.338 "hdgst": false, 00:10:14.338 "ddgst": false 00:10:14.338 }, 00:10:14.338 "method": "bdev_nvme_attach_controller" 00:10:14.338 }' 00:10:14.596 [2024-11-19 20:59:48.196479] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:10:14.596 [2024-11-19 20:59:48.196635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903280 ] 00:10:14.596 [2024-11-19 20:59:48.356960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.854 [2024-11-19 20:59:48.497776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.420 Running I/O for 10 seconds... 00:10:17.288 4083.00 IOPS, 31.90 MiB/s [2024-11-19T19:59:52.017Z] 4182.50 IOPS, 32.68 MiB/s [2024-11-19T19:59:53.393Z] 4232.00 IOPS, 33.06 MiB/s [2024-11-19T19:59:54.329Z] 4242.00 IOPS, 33.14 MiB/s [2024-11-19T19:59:55.265Z] 4240.80 IOPS, 33.13 MiB/s [2024-11-19T19:59:56.199Z] 4250.17 IOPS, 33.20 MiB/s [2024-11-19T19:59:57.135Z] 4254.29 IOPS, 33.24 MiB/s [2024-11-19T19:59:58.069Z] 4255.00 IOPS, 33.24 MiB/s [2024-11-19T19:59:59.444Z] 4257.56 IOPS, 33.26 MiB/s [2024-11-19T19:59:59.444Z] 4252.20 IOPS, 33.22 MiB/s 00:10:25.649 Latency(us) 00:10:25.649 [2024-11-19T19:59:59.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.649 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:25.649 Verification LBA range: start 0x0 length 0x1000 00:10:25.649 Nvme1n1 : 10.02 4255.05 33.24 0.00 0.00 30000.26 4369.07 40777.96 00:10:25.649 [2024-11-19T19:59:59.444Z] =================================================================================================================== 00:10:25.649 [2024-11-19T19:59:59.444Z] Total : 4255.05 33.24 0.00 0.00 30000.26 4369.07 40777.96 00:10:26.216 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2904608 00:10:26.216 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:26.216 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:26.216 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:26.216 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:26.216 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:26.216 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:26.216 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:26.216 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:26.216 { 00:10:26.216 "params": { 00:10:26.216 "name": 
"Nvme$subsystem", 00:10:26.216 "trtype": "$TEST_TRANSPORT", 00:10:26.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:26.216 "adrfam": "ipv4", 00:10:26.216 "trsvcid": "$NVMF_PORT", 00:10:26.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:26.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:26.216 "hdgst": ${hdgst:-false}, 00:10:26.216 "ddgst": ${ddgst:-false} 00:10:26.216 }, 00:10:26.216 "method": "bdev_nvme_attach_controller" 00:10:26.216 } 00:10:26.216 EOF 00:10:26.216 )") 00:10:26.216 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:26.216 [2024-11-19 20:59:59.979207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.216 [2024-11-19 20:59:59.979266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.216 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:26.216 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:26.216 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:26.216 "params": { 00:10:26.216 "name": "Nvme1", 00:10:26.216 "trtype": "tcp", 00:10:26.216 "traddr": "10.0.0.2", 00:10:26.216 "adrfam": "ipv4", 00:10:26.216 "trsvcid": "4420", 00:10:26.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:26.216 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:26.216 "hdgst": false, 00:10:26.216 "ddgst": false 00:10:26.216 }, 00:10:26.216 "method": "bdev_nvme_attach_controller" 00:10:26.216 }' 00:10:26.216 [2024-11-19 20:59:59.987159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.216 [2024-11-19 20:59:59.987196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.216 [2024-11-19 20:59:59.995137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.216 [2024-11-19 20:59:59.995171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.216 [2024-11-19 21:00:00.003179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.216 [2024-11-19 21:00:00.003215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.474 [2024-11-19 21:00:00.011245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.474 [2024-11-19 21:00:00.011286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.474 [2024-11-19 21:00:00.019217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.474 [2024-11-19 21:00:00.019255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.474 [2024-11-19 21:00:00.027257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.474 [2024-11-19 21:00:00.027293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.474 [2024-11-19 21:00:00.035270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.474 [2024-11-19 21:00:00.035305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.474 [2024-11-19 21:00:00.043279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.474 [2024-11-19 21:00:00.043317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.474 [2024-11-19 21:00:00.051331] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.474 [2024-11-19 21:00:00.051376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.474 [2024-11-19 21:00:00.059320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.474 [2024-11-19 21:00:00.059363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.474 [2024-11-19 21:00:00.067374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.474 [2024-11-19 21:00:00.067408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.474 [2024-11-19 21:00:00.075402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.474 [2024-11-19 21:00:00.075437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.474 [2024-11-19 21:00:00.076790] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:10:26.474 [2024-11-19 21:00:00.076917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2904608 ] 00:10:26.475 [2024-11-19 21:00:00.083398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.083432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.091452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.091486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.099465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.099498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.107489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.107523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.115499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.115532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.123506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.123540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.131550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.131583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.139587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.139621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.147576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.147610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 
[2024-11-19 21:00:00.155637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.155671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.163644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.163678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.171642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.171675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.179683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.179717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.187691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.187724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.195728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.195762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.203770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.203803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.211755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.211788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.219798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.219831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.227826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.227859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.230272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.475 [2024-11-19 21:00:00.235820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.235852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.243891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.243928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.251922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.251975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.259945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.259980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.475 [2024-11-19 21:00:00.267940] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.475 [2024-11-19 21:00:00.267974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.275939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.275971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.283997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.284030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.292022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.292060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.300035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.300067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.308054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.308105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.316058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.316100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.324111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.324144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.332133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.332166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.340133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.340167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.348225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.348259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.356213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.356247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.364207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.364239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.371089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.734 [2024-11-19 21:00:00.372268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.372301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.380269] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.380301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.388349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.388404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.396433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.396487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.404328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.404362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.412380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.412414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.420392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.420425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.428405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.428437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.436446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.436480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.444441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.444473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.452496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.452542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.460537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.460576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.468558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.468607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.476626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.476679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.484641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.484692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.492654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.492699] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.500633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.500667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.508623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.508655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.516672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.516704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.734 [2024-11-19 21:00:00.524700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.734 [2024-11-19 21:00:00.524733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.532693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.532726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.540739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.540772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.548761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.548795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.556767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.556799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.564809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.564844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.572803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.572836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.580846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.580879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.588889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.588922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.596879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.596912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.604926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.604970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.612940] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.612973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.621003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.621054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.629060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.629131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.637020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.637086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.645019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.645047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.653035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.653090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.661037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.661094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.669103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.669133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.677123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.677152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.993 [2024-11-19 21:00:00.685145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.993 [2024-11-19 21:00:00.685174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.994 [2024-11-19 21:00:00.693166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.994 [2024-11-19 21:00:00.693195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.994 [2024-11-19 21:00:00.701182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.994 [2024-11-19 21:00:00.701212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.994 [2024-11-19 21:00:00.709218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.994 [2024-11-19 21:00:00.709248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.994 [2024-11-19 21:00:00.717233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.994 [2024-11-19 21:00:00.717262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.994 [2024-11-19 21:00:00.725273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.994 [2024-11-19 21:00:00.725303] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.994 [2024-11-19 21:00:00.733335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.994 [2024-11-19 21:00:00.733385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.994 [2024-11-19 21:00:00.741335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.994 [2024-11-19 21:00:00.741383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.994 [2024-11-19 21:00:00.749351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.994 [2024-11-19 21:00:00.749398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.994 [2024-11-19 21:00:00.757384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.994 [2024-11-19 21:00:00.757439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.994 [2024-11-19 21:00:00.765387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.994 [2024-11-19 21:00:00.765433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.994 [2024-11-19 21:00:00.773441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.994 [2024-11-19 21:00:00.773470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.994 [2024-11-19 21:00:00.781460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.994 [2024-11-19 21:00:00.781488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:00.789440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:00.789468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:00.797500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:00.797528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:00.805521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:00.805550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:00.813524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:00.813554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:00.821562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:00.821593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:00.829558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:00.829598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:00.837624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:00.837653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:00.845656] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:00.845687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:00.853629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:00.853657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 Running I/O for 5 seconds... 00:10:27.252 [2024-11-19 21:00:00.867013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:00.867066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:00.880422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:00.880459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:00.896633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:00.896675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:00.911874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:00.911929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:00.924045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:00.924095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:00.938567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:00.938608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:00.953737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:00.953784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:00.968678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:00.968714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:00.983458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:00.983509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:00.998490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:00.998529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:01.013606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:01.013657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:01.028242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 [2024-11-19 21:00:01.028279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.252 [2024-11-19 21:00:01.042365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.252 
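The subsystem.c/nvmf_rpc.c error pairs interleaved with the bdevperf output above are the expected result of the test re-issuing nvmf_subsystem_add_ns for NSID 1, which is already attached to cnode1, while the 5-second randrw run is in flight; each failed attempt exercises the subsystem pause/resume path under zero-copy I/O. A minimal manual reproduction of the same error pair against the target set up earlier in this log (an illustration of the error path only, not the test's exact loop, which is not shown here):

# NSID 1 was already added once during target setup, so a second add must fail
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# target log then shows the same pair as above:
#   spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use
#   nvmf_rpc_ns_paused: Unable to add namespace   (and the RPC returns an error to the caller)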
[2024-11-19 21:00:01.042401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.511 [2024-11-19 21:00:01.056605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.511 [2024-11-19 21:00:01.056646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.511 [2024-11-19 21:00:01.071205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.511 [2024-11-19 21:00:01.071241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.511 [2024-11-19 21:00:01.085784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.511 [2024-11-19 21:00:01.085835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.511 [2024-11-19 21:00:01.100896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.511 [2024-11-19 21:00:01.100935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.511 [2024-11-19 21:00:01.113180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.511 [2024-11-19 21:00:01.113232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.511 [2024-11-19 21:00:01.127103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.511 [2024-11-19 21:00:01.127145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.511 [2024-11-19 21:00:01.141717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.511 [2024-11-19 21:00:01.141753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.511 [2024-11-19 21:00:01.156188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.511 [2024-11-19 21:00:01.156224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.511 [2024-11-19 21:00:01.171119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.511 [2024-11-19 21:00:01.171170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.511 [2024-11-19 21:00:01.185792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.511 [2024-11-19 21:00:01.185828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.511 [2024-11-19 21:00:01.200019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.511 [2024-11-19 21:00:01.200060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.511 [2024-11-19 21:00:01.214819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.511 [2024-11-19 21:00:01.214858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.511 [2024-11-19 21:00:01.229419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.511 [2024-11-19 21:00:01.229486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.511 [2024-11-19 21:00:01.243847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.511 [2024-11-19 21:00:01.243882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.511 [2024-11-19 21:00:01.258753] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.511 [2024-11-19 21:00:01.258790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.511 [2024-11-19 21:00:01.272914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.511 [2024-11-19 21:00:01.272949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.511 [2024-11-19 21:00:01.288685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.511 [2024-11-19 21:00:01.288726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.511 [2024-11-19 21:00:01.303761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.511 [2024-11-19 21:00:01.303798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.769 [2024-11-19 21:00:01.319007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.769 [2024-11-19 21:00:01.319047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.769 [2024-11-19 21:00:01.333757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.769 [2024-11-19 21:00:01.333797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.769 [2024-11-19 21:00:01.348473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.769 [2024-11-19 21:00:01.348513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.769 [2024-11-19 21:00:01.363597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.769 [2024-11-19 21:00:01.363638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.769 [2024-11-19 21:00:01.378527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.769 [2024-11-19 21:00:01.378577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.769 [2024-11-19 21:00:01.393137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.769 [2024-11-19 21:00:01.393172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.769 [2024-11-19 21:00:01.407412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.769 [2024-11-19 21:00:01.407463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.769 [2024-11-19 21:00:01.421481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.769 [2024-11-19 21:00:01.421521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.769 [2024-11-19 21:00:01.435831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.769 [2024-11-19 21:00:01.435871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.769 [2024-11-19 21:00:01.450976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.769 [2024-11-19 21:00:01.451015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.769 [2024-11-19 21:00:01.466208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.769 [2024-11-19 21:00:01.466258] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.769 [2024-11-19 21:00:01.477873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.769 [2024-11-19 21:00:01.477907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.769 [2024-11-19 21:00:01.491881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.769 [2024-11-19 21:00:01.491917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.769 [2024-11-19 21:00:01.506542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.769 [2024-11-19 21:00:01.506592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.769 [2024-11-19 21:00:01.521463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.769 [2024-11-19 21:00:01.521499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.769 [2024-11-19 21:00:01.535489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.769 [2024-11-19 21:00:01.535525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.769 [2024-11-19 21:00:01.549946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.769 [2024-11-19 21:00:01.549986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.029 [2024-11-19 21:00:01.564256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.029 [2024-11-19 21:00:01.564292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.029 [2024-11-19 21:00:01.579161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.029 [2024-11-19 21:00:01.579195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.029 [2024-11-19 21:00:01.593943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.029 [2024-11-19 21:00:01.593982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.029 [2024-11-19 21:00:01.608860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.029 [2024-11-19 21:00:01.608896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.029 [2024-11-19 21:00:01.624003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.029 [2024-11-19 21:00:01.624043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.029 [2024-11-19 21:00:01.639055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.029 [2024-11-19 21:00:01.639104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.029 [2024-11-19 21:00:01.654092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.029 [2024-11-19 21:00:01.654144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.029 [2024-11-19 21:00:01.669859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.029 [2024-11-19 21:00:01.669913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.029 [2024-11-19 21:00:01.685866] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.029 [2024-11-19 21:00:01.685923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.029 [2024-11-19 21:00:01.701038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.029 [2024-11-19 21:00:01.701086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.029 [2024-11-19 21:00:01.715283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.029 [2024-11-19 21:00:01.715319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.029 [2024-11-19 21:00:01.730063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.029 [2024-11-19 21:00:01.730130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.029 [2024-11-19 21:00:01.744928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.029 [2024-11-19 21:00:01.744965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.029 [2024-11-19 21:00:01.760264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.029 [2024-11-19 21:00:01.760314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.029 [2024-11-19 21:00:01.775610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.029 [2024-11-19 21:00:01.775651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.029 [2024-11-19 21:00:01.790156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.029 [2024-11-19 21:00:01.790193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.029 [2024-11-19 21:00:01.804835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.029 [2024-11-19 21:00:01.804875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.029 [2024-11-19 21:00:01.820740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.029 [2024-11-19 21:00:01.820779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.288 [2024-11-19 21:00:01.835885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.288 [2024-11-19 21:00:01.835924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.288 [2024-11-19 21:00:01.851086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.288 [2024-11-19 21:00:01.851129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.288 8537.00 IOPS, 66.70 MiB/s [2024-11-19T20:00:02.083Z] [2024-11-19 21:00:01.865990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.288 [2024-11-19 21:00:01.866030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.288 [2024-11-19 21:00:01.881161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.288 [2024-11-19 21:00:01.881200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.288 [2024-11-19 21:00:01.897091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:28.288 [2024-11-19 21:00:01.897131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.288 [2024-11-19 21:00:01.913109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.288 [2024-11-19 21:00:01.913150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.288 [2024-11-19 21:00:01.929024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.288 [2024-11-19 21:00:01.929082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.288 [2024-11-19 21:00:01.944160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.288 [2024-11-19 21:00:01.944199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.288 [2024-11-19 21:00:01.959154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.288 [2024-11-19 21:00:01.959194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.288 [2024-11-19 21:00:01.973917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.288 [2024-11-19 21:00:01.973956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.288 [2024-11-19 21:00:01.989451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.288 [2024-11-19 21:00:01.989490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.288 [2024-11-19 21:00:02.005380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.288 [2024-11-19 21:00:02.005420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.288 [2024-11-19 21:00:02.017772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.288 [2024-11-19 21:00:02.017812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.288 [2024-11-19 21:00:02.030995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.288 [2024-11-19 21:00:02.031035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.288 [2024-11-19 21:00:02.045800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.288 [2024-11-19 21:00:02.045840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.288 [2024-11-19 21:00:02.060988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.288 [2024-11-19 21:00:02.061047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.288 [2024-11-19 21:00:02.075215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.288 [2024-11-19 21:00:02.075254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.546 [2024-11-19 21:00:02.090800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.546 [2024-11-19 21:00:02.090839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.546 [2024-11-19 21:00:02.103452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.546 [2024-11-19 21:00:02.103491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.546 [2024-11-19 21:00:02.118743] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.546 [2024-11-19 21:00:02.118783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.546 [2024-11-19 21:00:02.133537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.546 [2024-11-19 21:00:02.133576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.546 [2024-11-19 21:00:02.148651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.546 [2024-11-19 21:00:02.148690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.546 [2024-11-19 21:00:02.161870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.546 [2024-11-19 21:00:02.161909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.546 [2024-11-19 21:00:02.176818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.546 [2024-11-19 21:00:02.176858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.546 [2024-11-19 21:00:02.192092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.546 [2024-11-19 21:00:02.192131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.546 [2024-11-19 21:00:02.207315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.546 [2024-11-19 21:00:02.207355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.546 [2024-11-19 21:00:02.222579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.546 [2024-11-19 21:00:02.222619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.546 [2024-11-19 21:00:02.237743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.546 [2024-11-19 21:00:02.237782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.546 [2024-11-19 21:00:02.252879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.546 [2024-11-19 21:00:02.252919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.546 [2024-11-19 21:00:02.265618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.546 [2024-11-19 21:00:02.265658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.546 [2024-11-19 21:00:02.280428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.546 [2024-11-19 21:00:02.280468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.546 [2024-11-19 21:00:02.295485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.547 [2024-11-19 21:00:02.295524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.547 [2024-11-19 21:00:02.310980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.547 [2024-11-19 21:00:02.311020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.547 [2024-11-19 21:00:02.323558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.547 [2024-11-19 21:00:02.323597] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.547 [2024-11-19 21:00:02.338541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.547 [2024-11-19 21:00:02.338592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.805 [2024-11-19 21:00:02.354375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.805 [2024-11-19 21:00:02.354415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.805 [2024-11-19 21:00:02.370146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.805 [2024-11-19 21:00:02.370186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.805 [2024-11-19 21:00:02.385604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.805 [2024-11-19 21:00:02.385644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.805 [2024-11-19 21:00:02.398145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.805 [2024-11-19 21:00:02.398196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.805 [2024-11-19 21:00:02.413218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.805 [2024-11-19 21:00:02.413256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.805 [2024-11-19 21:00:02.428602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.805 [2024-11-19 21:00:02.428642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.805 [2024-11-19 21:00:02.444662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.805 [2024-11-19 21:00:02.444702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.805 [2024-11-19 21:00:02.460356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.805 [2024-11-19 21:00:02.460398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.805 [2024-11-19 21:00:02.475178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.805 [2024-11-19 21:00:02.475218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.805 [2024-11-19 21:00:02.490157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.805 [2024-11-19 21:00:02.490197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.805 [2024-11-19 21:00:02.504878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.805 [2024-11-19 21:00:02.504917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.805 [2024-11-19 21:00:02.520036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.805 [2024-11-19 21:00:02.520088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.805 [2024-11-19 21:00:02.535207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.805 [2024-11-19 21:00:02.535246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.805 [2024-11-19 21:00:02.550278] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.805 [2024-11-19 21:00:02.550318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.805 [2024-11-19 21:00:02.565658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.805 [2024-11-19 21:00:02.565698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.805 [2024-11-19 21:00:02.580618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.805 [2024-11-19 21:00:02.580657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.805 [2024-11-19 21:00:02.595403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.805 [2024-11-19 21:00:02.595442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.080 [2024-11-19 21:00:02.610855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.080 [2024-11-19 21:00:02.610895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.080 [2024-11-19 21:00:02.625691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.080 [2024-11-19 21:00:02.625739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.080 [2024-11-19 21:00:02.640812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.080 [2024-11-19 21:00:02.640851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.080 [2024-11-19 21:00:02.656139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.080 [2024-11-19 21:00:02.656178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.080 [2024-11-19 21:00:02.671242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.080 [2024-11-19 21:00:02.671280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.080 [2024-11-19 21:00:02.686110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.080 [2024-11-19 21:00:02.686149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.080 [2024-11-19 21:00:02.701182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.080 [2024-11-19 21:00:02.701221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.080 [2024-11-19 21:00:02.713993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.080 [2024-11-19 21:00:02.714032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.080 [2024-11-19 21:00:02.729339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.080 [2024-11-19 21:00:02.729378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.080 [2024-11-19 21:00:02.743847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.081 [2024-11-19 21:00:02.743886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.081 [2024-11-19 21:00:02.758813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.081 [2024-11-19 21:00:02.758853] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.081 [2024-11-19 21:00:02.773665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.081 [2024-11-19 21:00:02.773704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.081 [2024-11-19 21:00:02.787919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.081 [2024-11-19 21:00:02.787958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.081 [2024-11-19 21:00:02.803038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.081 [2024-11-19 21:00:02.803091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.081 [2024-11-19 21:00:02.818202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.081 [2024-11-19 21:00:02.818241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.081 [2024-11-19 21:00:02.832970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.081 [2024-11-19 21:00:02.833009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.081 [2024-11-19 21:00:02.848406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.081 [2024-11-19 21:00:02.848446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.081 [2024-11-19 21:00:02.863336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.081 [2024-11-19 21:00:02.863377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.380 8457.00 IOPS, 66.07 MiB/s [2024-11-19T20:00:03.175Z] [2024-11-19 21:00:02.878491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.380 [2024-11-19 21:00:02.878532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.380 [2024-11-19 21:00:02.894081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.380 [2024-11-19 21:00:02.894121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.380 [2024-11-19 21:00:02.909534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.380 [2024-11-19 21:00:02.909574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.380 [2024-11-19 21:00:02.924848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.380 [2024-11-19 21:00:02.924887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.380 [2024-11-19 21:00:02.937748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.380 [2024-11-19 21:00:02.937788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.380 [2024-11-19 21:00:02.952493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.380 [2024-11-19 21:00:02.952532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.380 [2024-11-19 21:00:02.967781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.380 [2024-11-19 21:00:02.967820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.380 [2024-11-19 
21:00:02.983047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.380 [2024-11-19 21:00:02.983097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.380 [2024-11-19 21:00:02.999016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.380 [2024-11-19 21:00:02.999055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.380 [2024-11-19 21:00:03.014831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.380 [2024-11-19 21:00:03.014883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.380 [2024-11-19 21:00:03.029892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.380 [2024-11-19 21:00:03.029931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.380 [2024-11-19 21:00:03.045381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.380 [2024-11-19 21:00:03.045420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.380 [2024-11-19 21:00:03.060639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.380 [2024-11-19 21:00:03.060678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.380 [2024-11-19 21:00:03.076147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.380 [2024-11-19 21:00:03.076187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.380 [2024-11-19 21:00:03.092086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.380 [2024-11-19 21:00:03.092124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.380 [2024-11-19 21:00:03.106951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.380 [2024-11-19 21:00:03.106990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.380 [2024-11-19 21:00:03.121986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.380 [2024-11-19 21:00:03.122026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.380 [2024-11-19 21:00:03.137552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.380 [2024-11-19 21:00:03.137591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.380 [2024-11-19 21:00:03.152441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.380 [2024-11-19 21:00:03.152480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.639 [2024-11-19 21:00:03.167226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.639 [2024-11-19 21:00:03.167267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.639 [2024-11-19 21:00:03.182542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.639 [2024-11-19 21:00:03.182583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.639 [2024-11-19 21:00:03.194949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.639 [2024-11-19 21:00:03.194989] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.639 [2024-11-19 21:00:03.209891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.639 [2024-11-19 21:00:03.209932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.639 [2024-11-19 21:00:03.225461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.639 [2024-11-19 21:00:03.225501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.639 [2024-11-19 21:00:03.240502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.639 [2024-11-19 21:00:03.240542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.639 [2024-11-19 21:00:03.255879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.639 [2024-11-19 21:00:03.255919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.639 [2024-11-19 21:00:03.269545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.639 [2024-11-19 21:00:03.269581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.639 [2024-11-19 21:00:03.282367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.639 [2024-11-19 21:00:03.282403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.639 [2024-11-19 21:00:03.296453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.639 [2024-11-19 21:00:03.296489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.639 [2024-11-19 21:00:03.310751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.639 [2024-11-19 21:00:03.310788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.639 [2024-11-19 21:00:03.324941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.639 [2024-11-19 21:00:03.324977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.639 [2024-11-19 21:00:03.339770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.639 [2024-11-19 21:00:03.339820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.639 [2024-11-19 21:00:03.354365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.639 [2024-11-19 21:00:03.354402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.639 [2024-11-19 21:00:03.368363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.639 [2024-11-19 21:00:03.368399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.639 [2024-11-19 21:00:03.382744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.639 [2024-11-19 21:00:03.382796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.639 [2024-11-19 21:00:03.397365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.639 [2024-11-19 21:00:03.397401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.639 [2024-11-19 21:00:03.411325] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.639 [2024-11-19 21:00:03.411376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.639 [2024-11-19 21:00:03.425654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.639 [2024-11-19 21:00:03.425690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.898 [2024-11-19 21:00:03.439497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.898 [2024-11-19 21:00:03.439534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.898 [2024-11-19 21:00:03.453638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.898 [2024-11-19 21:00:03.453675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.898 [2024-11-19 21:00:03.468170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.898 [2024-11-19 21:00:03.468206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.898 [2024-11-19 21:00:03.481992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.898 [2024-11-19 21:00:03.482028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.898 [2024-11-19 21:00:03.495829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.898 [2024-11-19 21:00:03.495865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.898 [2024-11-19 21:00:03.509741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.898 [2024-11-19 21:00:03.509794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.898 [2024-11-19 21:00:03.524484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.898 [2024-11-19 21:00:03.524525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.898 [2024-11-19 21:00:03.539664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.898 [2024-11-19 21:00:03.539704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.898 [2024-11-19 21:00:03.554828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.898 [2024-11-19 21:00:03.554880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.898 [2024-11-19 21:00:03.569693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.898 [2024-11-19 21:00:03.569744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.898 [2024-11-19 21:00:03.584434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.898 [2024-11-19 21:00:03.584469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.898 [2024-11-19 21:00:03.599348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.898 [2024-11-19 21:00:03.599389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.898 [2024-11-19 21:00:03.614049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.898 [2024-11-19 21:00:03.614103] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.898 [2024-11-19 21:00:03.629419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.898 [2024-11-19 21:00:03.629455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.898 [2024-11-19 21:00:03.643555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.898 [2024-11-19 21:00:03.643595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.898 [2024-11-19 21:00:03.658323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.898 [2024-11-19 21:00:03.658379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.898 [2024-11-19 21:00:03.672648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.898 [2024-11-19 21:00:03.672683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.898 [2024-11-19 21:00:03.687652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.898 [2024-11-19 21:00:03.687692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.157 [2024-11-19 21:00:03.703408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.157 [2024-11-19 21:00:03.703448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.157 [2024-11-19 21:00:03.715213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.157 [2024-11-19 21:00:03.715249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.157 [2024-11-19 21:00:03.729250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.157 [2024-11-19 21:00:03.729285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.157 [2024-11-19 21:00:03.744018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.157 [2024-11-19 21:00:03.744053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.157 [2024-11-19 21:00:03.759181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.157 [2024-11-19 21:00:03.759216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.157 [2024-11-19 21:00:03.773538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.157 [2024-11-19 21:00:03.773573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.157 [2024-11-19 21:00:03.788398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.157 [2024-11-19 21:00:03.788433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.157 [2024-11-19 21:00:03.803613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.157 [2024-11-19 21:00:03.803649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.157 [2024-11-19 21:00:03.818557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.157 [2024-11-19 21:00:03.818608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.157 [2024-11-19 21:00:03.833026] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.157 [2024-11-19 21:00:03.833066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.157 [2024-11-19 21:00:03.847511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.157 [2024-11-19 21:00:03.847550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.157 [2024-11-19 21:00:03.861733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.157 [2024-11-19 21:00:03.861769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.157 8487.33 IOPS, 66.31 MiB/s [2024-11-19T20:00:03.952Z] [2024-11-19 21:00:03.875540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.157 [2024-11-19 21:00:03.875575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.157 [2024-11-19 21:00:03.890745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.157 [2024-11-19 21:00:03.890781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.157 [2024-11-19 21:00:03.905060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.157 [2024-11-19 21:00:03.905110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.157 [2024-11-19 21:00:03.919342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.157 [2024-11-19 21:00:03.919394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.157 [2024-11-19 21:00:03.934335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.157 [2024-11-19 21:00:03.934387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.157 [2024-11-19 21:00:03.948513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.157 [2024-11-19 21:00:03.948579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.416 [2024-11-19 21:00:03.963373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.416 [2024-11-19 21:00:03.963408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.416 [2024-11-19 21:00:03.977965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.416 [2024-11-19 21:00:03.978005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.416 [2024-11-19 21:00:03.991932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.416 [2024-11-19 21:00:03.991982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.416 [2024-11-19 21:00:04.006444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.416 [2024-11-19 21:00:04.006497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.416 [2024-11-19 21:00:04.021273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.416 [2024-11-19 21:00:04.021308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.416 [2024-11-19 21:00:04.035928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:30.416 [2024-11-19 21:00:04.035962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.416 [2024-11-19 21:00:04.050796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.416 [2024-11-19 21:00:04.050836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.416 [2024-11-19 21:00:04.065479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.416 [2024-11-19 21:00:04.065515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.416 [2024-11-19 21:00:04.080693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.416 [2024-11-19 21:00:04.080744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.416 [2024-11-19 21:00:04.092579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.416 [2024-11-19 21:00:04.092618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.416 [2024-11-19 21:00:04.106753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.416 [2024-11-19 21:00:04.106793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.416 [2024-11-19 21:00:04.121670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.416 [2024-11-19 21:00:04.121706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.416 [2024-11-19 21:00:04.136036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.416 [2024-11-19 21:00:04.136087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.416 [2024-11-19 21:00:04.151057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.416 [2024-11-19 21:00:04.151107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.416 [2024-11-19 21:00:04.165106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.416 [2024-11-19 21:00:04.165159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.416 [2024-11-19 21:00:04.179580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.416 [2024-11-19 21:00:04.179630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.416 [2024-11-19 21:00:04.194483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.416 [2024-11-19 21:00:04.194522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.416 [2024-11-19 21:00:04.208967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.416 [2024-11-19 21:00:04.209006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.675 [2024-11-19 21:00:04.223628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.675 [2024-11-19 21:00:04.223668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.675 [2024-11-19 21:00:04.238713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.675 [2024-11-19 21:00:04.238753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.675 [2024-11-19 21:00:04.254607] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.675 [2024-11-19 21:00:04.254647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.675 [2024-11-19 21:00:04.270502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.675 [2024-11-19 21:00:04.270541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.675 [2024-11-19 21:00:04.283326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.675 [2024-11-19 21:00:04.283379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.675 [2024-11-19 21:00:04.298440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.675 [2024-11-19 21:00:04.298480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.675 [2024-11-19 21:00:04.313763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.675 [2024-11-19 21:00:04.313803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.675 [2024-11-19 21:00:04.328847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.675 [2024-11-19 21:00:04.328887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.675 [2024-11-19 21:00:04.344003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.675 [2024-11-19 21:00:04.344042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.675 [2024-11-19 21:00:04.357044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.675 [2024-11-19 21:00:04.357094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.675 [2024-11-19 21:00:04.372159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.675 [2024-11-19 21:00:04.372198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.675 [2024-11-19 21:00:04.387775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.675 [2024-11-19 21:00:04.387814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.675 [2024-11-19 21:00:04.402855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.675 [2024-11-19 21:00:04.402894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.675 [2024-11-19 21:00:04.418231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.675 [2024-11-19 21:00:04.418271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.675 [2024-11-19 21:00:04.434251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.675 [2024-11-19 21:00:04.434290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.675 [2024-11-19 21:00:04.449458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.675 [2024-11-19 21:00:04.449497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.675 [2024-11-19 21:00:04.465479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.675 [2024-11-19 21:00:04.465517] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.935 [2024-11-19 21:00:04.481026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.935 [2024-11-19 21:00:04.481065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.935 [2024-11-19 21:00:04.496554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.935 [2024-11-19 21:00:04.496594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.935 [2024-11-19 21:00:04.512247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.935 [2024-11-19 21:00:04.512287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.935 [2024-11-19 21:00:04.527557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.935 [2024-11-19 21:00:04.527597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.935 [2024-11-19 21:00:04.542457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.935 [2024-11-19 21:00:04.542497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.935 [2024-11-19 21:00:04.556917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.935 [2024-11-19 21:00:04.556956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.935 [2024-11-19 21:00:04.571809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.935 [2024-11-19 21:00:04.571860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.935 [2024-11-19 21:00:04.586707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.935 [2024-11-19 21:00:04.586746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.935 [2024-11-19 21:00:04.602473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.935 [2024-11-19 21:00:04.602512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.935 [2024-11-19 21:00:04.617970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.935 [2024-11-19 21:00:04.618009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.935 [2024-11-19 21:00:04.633543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.935 [2024-11-19 21:00:04.633582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.935 [2024-11-19 21:00:04.648365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.935 [2024-11-19 21:00:04.648404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.935 [2024-11-19 21:00:04.663564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.935 [2024-11-19 21:00:04.663603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.935 [2024-11-19 21:00:04.679011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.935 [2024-11-19 21:00:04.679051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.935 [2024-11-19 21:00:04.694035] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.935 [2024-11-19 21:00:04.694086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.935 [2024-11-19 21:00:04.709180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.935 [2024-11-19 21:00:04.709220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.935 [2024-11-19 21:00:04.725006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.935 [2024-11-19 21:00:04.725046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.194 [2024-11-19 21:00:04.741634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.194 [2024-11-19 21:00:04.741674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.194 [2024-11-19 21:00:04.757612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.194 [2024-11-19 21:00:04.757652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.194 [2024-11-19 21:00:04.773128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.194 [2024-11-19 21:00:04.773167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.194 [2024-11-19 21:00:04.789200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.194 [2024-11-19 21:00:04.789240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.194 [2024-11-19 21:00:04.805586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.194 [2024-11-19 21:00:04.805626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.194 [2024-11-19 21:00:04.821693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.194 [2024-11-19 21:00:04.821733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.194 [2024-11-19 21:00:04.837475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.194 [2024-11-19 21:00:04.837514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.194 [2024-11-19 21:00:04.852830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.194 [2024-11-19 21:00:04.852869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.194 [2024-11-19 21:00:04.867803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.194 [2024-11-19 21:00:04.867843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.194 8467.75 IOPS, 66.15 MiB/s [2024-11-19T20:00:04.989Z] [2024-11-19 21:00:04.883693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.194 [2024-11-19 21:00:04.883731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.194 [2024-11-19 21:00:04.898834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.194 [2024-11-19 21:00:04.898873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.194 [2024-11-19 21:00:04.914544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:31.194 [2024-11-19 21:00:04.914596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.194 [2024-11-19 21:00:04.929955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.194 [2024-11-19 21:00:04.929996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.194 [2024-11-19 21:00:04.944834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.194 [2024-11-19 21:00:04.944873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.194 [2024-11-19 21:00:04.959877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.194 [2024-11-19 21:00:04.959916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.195 [2024-11-19 21:00:04.974743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.195 [2024-11-19 21:00:04.974782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.454 [2024-11-19 21:00:04.989446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.454 [2024-11-19 21:00:04.989486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.454 [2024-11-19 21:00:05.004670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.454 [2024-11-19 21:00:05.004709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.454 [2024-11-19 21:00:05.019755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.454 [2024-11-19 21:00:05.019794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.454 [2024-11-19 21:00:05.034515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.454 [2024-11-19 21:00:05.034554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.454 [2024-11-19 21:00:05.049253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.454 [2024-11-19 21:00:05.049304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.454 [2024-11-19 21:00:05.064403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.454 [2024-11-19 21:00:05.064443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.454 [2024-11-19 21:00:05.079591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.454 [2024-11-19 21:00:05.079630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.454 [2024-11-19 21:00:05.094057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.454 [2024-11-19 21:00:05.094106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.454 [2024-11-19 21:00:05.109093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.454 [2024-11-19 21:00:05.109152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.454 [2024-11-19 21:00:05.124486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.454 [2024-11-19 21:00:05.124526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.454 [2024-11-19 21:00:05.139518] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.454 [2024-11-19 21:00:05.139557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.454 [2024-11-19 21:00:05.154871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.454 [2024-11-19 21:00:05.154910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.454 [2024-11-19 21:00:05.167866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.454 [2024-11-19 21:00:05.167906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.454 [2024-11-19 21:00:05.183383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.454 [2024-11-19 21:00:05.183423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.454 [2024-11-19 21:00:05.198335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.454 [2024-11-19 21:00:05.198374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.454 [2024-11-19 21:00:05.213492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.454 [2024-11-19 21:00:05.213532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.454 [2024-11-19 21:00:05.228923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.454 [2024-11-19 21:00:05.228963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.454 [2024-11-19 21:00:05.244051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.454 [2024-11-19 21:00:05.244101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.713 [2024-11-19 21:00:05.259394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.713 [2024-11-19 21:00:05.259434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.713 [2024-11-19 21:00:05.274954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.713 [2024-11-19 21:00:05.274994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.713 [2024-11-19 21:00:05.290245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.713 [2024-11-19 21:00:05.290285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.713 [2024-11-19 21:00:05.305236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.713 [2024-11-19 21:00:05.305276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.713 [2024-11-19 21:00:05.320234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.713 [2024-11-19 21:00:05.320274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.713 [2024-11-19 21:00:05.335319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.713 [2024-11-19 21:00:05.335359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.713 [2024-11-19 21:00:05.350477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.713 [2024-11-19 21:00:05.350518] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.713 [2024-11-19 21:00:05.365292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.713 [2024-11-19 21:00:05.365332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.713 [2024-11-19 21:00:05.380312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.713 [2024-11-19 21:00:05.380351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.713 [2024-11-19 21:00:05.396410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.713 [2024-11-19 21:00:05.396450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.713 [2024-11-19 21:00:05.411641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.713 [2024-11-19 21:00:05.411681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.713 [2024-11-19 21:00:05.426570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.713 [2024-11-19 21:00:05.426618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.713 [2024-11-19 21:00:05.441652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.713 [2024-11-19 21:00:05.441692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.713 [2024-11-19 21:00:05.456804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.713 [2024-11-19 21:00:05.456844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.713 [2024-11-19 21:00:05.472589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.713 [2024-11-19 21:00:05.472629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.713 [2024-11-19 21:00:05.487555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.713 [2024-11-19 21:00:05.487595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.713 [2024-11-19 21:00:05.502145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.713 [2024-11-19 21:00:05.502184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.972 [2024-11-19 21:00:05.516310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.972 [2024-11-19 21:00:05.516350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.972 [2024-11-19 21:00:05.531570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.972 [2024-11-19 21:00:05.531608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.972 [2024-11-19 21:00:05.547004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.972 [2024-11-19 21:00:05.547043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.972 [2024-11-19 21:00:05.561512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.972 [2024-11-19 21:00:05.561552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.972 [2024-11-19 21:00:05.576453] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.972 [2024-11-19 21:00:05.576493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.972 [2024-11-19 21:00:05.591229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.972 [2024-11-19 21:00:05.591268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.972 [2024-11-19 21:00:05.605839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.972 [2024-11-19 21:00:05.605878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.972 [2024-11-19 21:00:05.621228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.972 [2024-11-19 21:00:05.621267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.972 [2024-11-19 21:00:05.636455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.972 [2024-11-19 21:00:05.636494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.972 [2024-11-19 21:00:05.651116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.972 [2024-11-19 21:00:05.651155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.972 [2024-11-19 21:00:05.667067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.972 [2024-11-19 21:00:05.667116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.972 [2024-11-19 21:00:05.682059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.972 [2024-11-19 21:00:05.682113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.972 [2024-11-19 21:00:05.696857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.972 [2024-11-19 21:00:05.696896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.972 [2024-11-19 21:00:05.711755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.972 [2024-11-19 21:00:05.711804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.972 [2024-11-19 21:00:05.726431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.972 [2024-11-19 21:00:05.726470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.972 [2024-11-19 21:00:05.741232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.972 [2024-11-19 21:00:05.741271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.972 [2024-11-19 21:00:05.755817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.972 [2024-11-19 21:00:05.755856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.231 [2024-11-19 21:00:05.771040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.231 [2024-11-19 21:00:05.771090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.231 [2024-11-19 21:00:05.786492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.231 [2024-11-19 21:00:05.786531] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.231 [2024-11-19 21:00:05.801959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.231 [2024-11-19 21:00:05.801999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.231 [2024-11-19 21:00:05.816629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.231 [2024-11-19 21:00:05.816668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.231 [2024-11-19 21:00:05.831264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.231 [2024-11-19 21:00:05.831303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.231 [2024-11-19 21:00:05.846149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.231 [2024-11-19 21:00:05.846189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.231 [2024-11-19 21:00:05.861497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.231 [2024-11-19 21:00:05.861537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.231 8461.40 IOPS, 66.10 MiB/s [2024-11-19T20:00:06.026Z] [2024-11-19 21:00:05.876134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.231 [2024-11-19 21:00:05.876187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.231 [2024-11-19 21:00:05.884005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.232 [2024-11-19 21:00:05.884044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.232 00:10:32.232 Latency(us) 00:10:32.232 [2024-11-19T20:00:06.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:32.232 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:32.232 Nvme1n1 : 5.01 8462.11 66.11 0.00 0.00 15099.63 5000.15 24660.95 00:10:32.232 [2024-11-19T20:00:06.027Z] =================================================================================================================== 00:10:32.232 [2024-11-19T20:00:06.027Z] Total : 8462.11 66.11 0.00 0.00 15099.63 5000.15 24660.95 00:10:32.232 [2024-11-19 21:00:05.891269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.232 [2024-11-19 21:00:05.891306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.232 [2024-11-19 21:00:05.899353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.232 [2024-11-19 21:00:05.899391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.232 [2024-11-19 21:00:05.907407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.232 [2024-11-19 21:00:05.907443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.232 [2024-11-19 21:00:05.915412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.232 [2024-11-19 21:00:05.915458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.232 [2024-11-19 21:00:05.923405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.232 [2024-11-19 21:00:05.923439] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.232 [2024-11-19 21:00:05.931458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.232 [2024-11-19 21:00:05.931492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.232 [2024-11-19 21:00:05.939578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.232 [2024-11-19 21:00:05.939636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.232 [2024-11-19 21:00:05.947547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.232 [2024-11-19 21:00:05.947606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.232 [2024-11-19 21:00:05.955563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.232 [2024-11-19 21:00:05.955592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.232 [2024-11-19 21:00:05.963494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.232 [2024-11-19 21:00:05.963522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.232 [2024-11-19 21:00:05.971584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.232 [2024-11-19 21:00:05.971613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.232 [2024-11-19 21:00:05.979572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.232 [2024-11-19 21:00:05.979600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.232 [2024-11-19 21:00:05.987594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.232 [2024-11-19 21:00:05.987621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.232 [2024-11-19 21:00:05.995615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.232 [2024-11-19 21:00:05.995643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.232 [2024-11-19 21:00:06.003635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.232 [2024-11-19 21:00:06.003662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.232 [2024-11-19 21:00:06.011639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.232 [2024-11-19 21:00:06.011666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.232 [2024-11-19 21:00:06.019745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.232 [2024-11-19 21:00:06.019790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.491 [2024-11-19 21:00:06.027798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.491 [2024-11-19 21:00:06.027853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.491 [2024-11-19 21:00:06.035831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.491 [2024-11-19 21:00:06.035887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.491 [2024-11-19 21:00:06.043762] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.491 [2024-11-19 21:00:06.043791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.491 [2024-11-19 21:00:06.051784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.491 [2024-11-19 21:00:06.051812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.491 [2024-11-19 21:00:06.059794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.491 [2024-11-19 21:00:06.059821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.491 [2024-11-19 21:00:06.067813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.491 [2024-11-19 21:00:06.067842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.491 [2024-11-19 21:00:06.075818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.491 [2024-11-19 21:00:06.075845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.491 [2024-11-19 21:00:06.083860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.491 [2024-11-19 21:00:06.083887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.491 [2024-11-19 21:00:06.091861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.491 [2024-11-19 21:00:06.091887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.491 [2024-11-19 21:00:06.099904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.491 [2024-11-19 21:00:06.099931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.491 [2024-11-19 21:00:06.107928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.491 [2024-11-19 21:00:06.107956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.491 [2024-11-19 21:00:06.115961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.115988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.123973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.124001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.131992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.132019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.140022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.140049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.148066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.148107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.156045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.156097] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.164132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.164162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.172165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.172197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.180219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.180277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.188272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.188315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.196307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.196337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.204206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.204235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.212259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.212292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.228318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.228353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.236374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.236418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.244470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.244531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.252476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.252536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.260539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.260605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.268456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.268491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.276426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.276458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.492 [2024-11-19 21:00:06.284500] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.492 [2024-11-19 21:00:06.284534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.292497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.292531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.300535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.300568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.308557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.308590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.316562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.316594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.324609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.324643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.332626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.332658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.340640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.340673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.348671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.348704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.356668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.356702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.364717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.364751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.372748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.372781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.380742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.380774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.388786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.388819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.396808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.396851] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.404811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.404844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.412861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.412894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.420915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.420969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.429003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.429067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.436931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.436965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.444951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.444984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.452975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.453010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.460994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.461028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.468990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.469023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.477049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.477092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.485039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.485080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.493113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.493142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.501133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.501162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.509136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.509165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.517172] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.517200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.525198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.525227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.533183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.533211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.751 [2024-11-19 21:00:06.541271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.751 [2024-11-19 21:00:06.541301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.549330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.549387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.557300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.557329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.565311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.565340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.573294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.573322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.581335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.581378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.589362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.589389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.597364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.597390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.605424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.605457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.613409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.613456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.621471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.621505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.629495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.629528] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.637511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.637548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.645605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.645662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.653566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.653600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.661568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.661601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.669634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.669668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.677608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.677648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.685664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.685697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.693690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.693724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.701675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.701708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.709735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.709768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.717727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.717756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.725749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.725781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.733822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.733856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.741809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.741844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.749853] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.749886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.757857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.757891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.765855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.765887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.773926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.773959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.781966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.781999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 [2024-11-19 21:00:06.789923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.010 [2024-11-19 21:00:06.789956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2904608) - No such process 00:10:33.010 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2904608 00:10:33.010 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.010 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.010 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:33.010 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.010 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:33.010 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.010 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:33.269 delay0 00:10:33.269 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.269 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:33.269 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.269 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:33.269 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.269 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:33.269 [2024-11-19 21:00:07.015235] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service 
referral 00:10:41.381 Initializing NVMe Controllers 00:10:41.381 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:41.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:41.381 Initialization complete. Launching workers. 00:10:41.381 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 266, failed: 12071 00:10:41.381 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12242, failed to submit 95 00:10:41.381 success 12136, unsuccessful 106, failed 0 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.381 rmmod nvme_tcp 00:10:41.381 rmmod nvme_fabrics 00:10:41.381 rmmod nvme_keyring 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2903119 ']' 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2903119 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2903119 ']' 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2903119 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2903119 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2903119' 00:10:41.381 killing process with pid 2903119 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2903119 00:10:41.381 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2903119 00:10:41.949 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.949 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.949 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 
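The trace above is the zcopy test tearing itself down: nvmftestfini clears the exit trap, syncs, unloads the NVMe-oF initiator modules (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring), and then kills the SPDK target process it started for the test (pid 2903119 in this run). Below is a minimal stand-alone sketch of that teardown pattern, not the harness code itself; the tgt_pid variable is a placeholder for wherever the target PID was saved, and only commands visible in the xtrace output are used.

#!/usr/bin/env bash
# Sketch of an nvmftestfini-style teardown (simplified; not the harness code).
# Assumption: tgt_pid holds the PID of the nvmf target app this shell started.
tgt_pid=${tgt_pid:?set tgt_pid to the nvmf target PID first}

sync    # flush outstanding I/O before pulling modules out

# Unload the initiator modules; '|| true' because they may already be gone.
modprobe -v -r nvme-tcp     || true
modprobe -v -r nvme-fabrics || true

# Signal the target only if the PID is still alive and is not a sudo wrapper
# (the harness makes the same check with 'ps --no-headers -o comm=').
if kill -0 "$tgt_pid" 2>/dev/null &&
   [[ $(ps --no-headers -o comm= "$tgt_pid") != sudo ]]; then
    echo "killing process with pid $tgt_pid"
    kill "$tgt_pid"
    wait "$tgt_pid" 2>/dev/null || true   # reap it if it is our child
fi

The kill -0 probe plus the comm= check mirror what killprocess does in the trace: they avoid signalling a PID that has already been recycled, or one that belongs to the sudo wrapper rather than the reactor process.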
00:10:41.949 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:41.949 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:41.949 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.949 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.950 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.950 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.950 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.950 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.950 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.857 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.857 00:10:43.857 real 0m32.947s 00:10:43.857 user 0m49.268s 00:10:43.857 sys 0m8.728s 00:10:43.858 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.858 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.858 ************************************ 00:10:43.858 END TEST nvmf_zcopy 00:10:43.858 ************************************ 00:10:43.858 21:00:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:43.858 21:00:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.858 21:00:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.858 21:00:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:43.858 ************************************ 00:10:43.858 START TEST nvmf_nmic 00:10:43.858 ************************************ 00:10:43.858 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:43.858 * Looking for test storage... 
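Before following the nmic test's storage probe any further, note what the network cleanup above actually did: the iptr helper re-applied the firewall state with every SPDK_NVMF-tagged rule filtered out, and the IPv4 addresses on the test NIC were flushed after the SPDK network namespace was removed. A rough stand-alone equivalent is sketched below; the interface name cvl_0_1 comes from the trace, and running as root with no concurrent iptables writers is assumed.

#!/usr/bin/env bash
# Sketch of the post-test network scrub seen in the trace (not the harness itself).
# Assumes root privileges and that nothing else is rewriting iptables right now.

# Re-load the ruleset minus everything the test tagged with SPDK_NVMF.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Drop any IPv4 addresses the test left on its NIC (interface name from the trace).
ip -4 addr flush cvl_0_1 2>/dev/null || true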
00:10:43.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.858 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:43.858 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:43.858 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:44.116 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:44.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.117 --rc genhtml_branch_coverage=1 00:10:44.117 --rc genhtml_function_coverage=1 00:10:44.117 --rc genhtml_legend=1 00:10:44.117 --rc geninfo_all_blocks=1 00:10:44.117 --rc geninfo_unexecuted_blocks=1 00:10:44.117 00:10:44.117 ' 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:44.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.117 --rc genhtml_branch_coverage=1 00:10:44.117 --rc genhtml_function_coverage=1 00:10:44.117 --rc genhtml_legend=1 00:10:44.117 --rc geninfo_all_blocks=1 00:10:44.117 --rc geninfo_unexecuted_blocks=1 00:10:44.117 00:10:44.117 ' 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:44.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.117 --rc genhtml_branch_coverage=1 00:10:44.117 --rc genhtml_function_coverage=1 00:10:44.117 --rc genhtml_legend=1 00:10:44.117 --rc geninfo_all_blocks=1 00:10:44.117 --rc geninfo_unexecuted_blocks=1 00:10:44.117 00:10:44.117 ' 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:44.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.117 --rc genhtml_branch_coverage=1 00:10:44.117 --rc genhtml_function_coverage=1 00:10:44.117 --rc genhtml_legend=1 00:10:44.117 --rc geninfo_all_blocks=1 00:10:44.117 --rc geninfo_unexecuted_blocks=1 00:10:44.117 00:10:44.117 ' 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
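The scripts/common.sh xtrace above is a version gate: it asks whether the installed lcov is older than 2 (lt 1.15 2 via cmp_versions), splitting both version strings on dots and comparing them field by field before exporting the LCOV_OPTS coverage flags. The sketch below shows the same idea in compact form; version_lt is an illustrative name, the harness itself uses the lt/cmp_versions helpers shown in the trace.

#!/usr/bin/env bash
# Simplified dotted-version "less than" check in the spirit of cmp_versions.
# version_lt is a made-up name; numeric dotted versions (e.g. 1.15) are assumed.
version_lt() {
    local -a a b
    local i
    IFS='.-' read -ra a <<< "$1"
    IFS='.-' read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # left side is newer
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # left side is older
    done
    return 1   # equal versions are not "less than"
}

# 1.15 is older than 2, so the old-style coverage flags from the trace are kept.
if version_lt 1.15 2; then
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi

Comparing numerically per field rather than lexically is the point of the helper: a plain string compare would call 1.9 newer than 1.15.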
00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:44.117 
21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.117 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:46.645 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.645 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:46.646 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.646 21:00:19 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:46.646 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:46.646 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:46.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:46.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:10:46.646 00:10:46.646 --- 10.0.0.2 ping statistics --- 00:10:46.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.646 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:46.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:46.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:10:46.646 00:10:46.646 --- 10.0.0.1 ping statistics --- 00:10:46.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.646 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:46.646 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.646 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2909022 00:10:46.647 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.647 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2909022 00:10:46.647 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2909022 ']' 00:10:46.647 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.647 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.647 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.647 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.647 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.647 [2024-11-19 21:00:20.108062] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
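Note: the nvmf_tcp_init sequence traced above splits the two E810 ports between a private network namespace (target side, 10.0.0.2) and the root namespace (initiator side, 10.0.0.1), so target and initiator talk over real hardware on one host. Condensed from the trace, the wiring is roughly the following; interface names cvl_0_0/cvl_0_1, addresses, and the iptables rule are the ones the log reports, and the commands need root:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # accept NVMe/TCP on 4420, tagged SPDK_NVMF so cleanup can strip the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator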
00:10:46.647 [2024-11-19 21:00:20.108247] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.647 [2024-11-19 21:00:20.284570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.647 [2024-11-19 21:00:20.432359] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.647 [2024-11-19 21:00:20.432447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.647 [2024-11-19 21:00:20.432472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.647 [2024-11-19 21:00:20.432496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.647 [2024-11-19 21:00:20.432515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.647 [2024-11-19 21:00:20.435388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.647 [2024-11-19 21:00:20.435446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.647 [2024-11-19 21:00:20.435514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.647 [2024-11-19 21:00:20.435520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.583 [2024-11-19 21:00:21.141585] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.583 Malloc0 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic 
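Note: rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock. The target bring-up nmic.sh has issued so far corresponds roughly to the following direct calls, with flag values copied verbatim from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as in NVMF_TRANSPORT_OPTS
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev with 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # allow any host, serial as used below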
-- common/autotest_common.sh@10 -- # set +x 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.583 [2024-11-19 21:00:21.262665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:47.583 test case1: single bdev can't be used in multiple subsystems 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.583 [2024-11-19 21:00:21.286312] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:47.583 [2024-11-19 21:00:21.286369] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:47.583 [2024-11-19 21:00:21.286403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.583 request: 00:10:47.583 { 00:10:47.583 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:47.583 "namespace": { 00:10:47.583 "bdev_name": "Malloc0", 00:10:47.583 "no_auto_visible": false 
00:10:47.583 }, 00:10:47.583 "method": "nvmf_subsystem_add_ns", 00:10:47.583 "req_id": 1 00:10:47.583 } 00:10:47.583 Got JSON-RPC error response 00:10:47.583 response: 00:10:47.583 { 00:10:47.583 "code": -32602, 00:10:47.583 "message": "Invalid parameters" 00:10:47.583 } 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:47.583 Adding namespace failed - expected result. 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:47.583 test case2: host connect to nvmf target in multiple paths 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.583 [2024-11-19 21:00:21.294477] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.583 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:48.519 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:49.085 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:49.085 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:49.085 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.085 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:49.085 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:50.985 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:50.985 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:50.985 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:50.985 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:50.985 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.985 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:50.985 21:00:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
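Note: both test cases above reduce to a handful of RPC and nvme-cli calls. Reconstructed from the trace, with the expected outcomes noted (rpc points at scripts/rpc.py as before; the host NQN/ID are the values nvme gen-hostnqn produced at the top of this test):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # test case1: one bdev cannot back namespaces in two subsystems
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # fails: Malloc0 already claimed by cnode1

    # test case2: one subsystem reachable through two portals
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    host="--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55"
    nvme connect $host -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect $host -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421    # second path to the same serial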
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:50.985 [global] 00:10:50.985 thread=1 00:10:50.985 invalidate=1 00:10:50.985 rw=write 00:10:50.985 time_based=1 00:10:50.985 runtime=1 00:10:50.985 ioengine=libaio 00:10:50.985 direct=1 00:10:50.985 bs=4096 00:10:50.985 iodepth=1 00:10:50.985 norandommap=0 00:10:50.985 numjobs=1 00:10:50.985 00:10:50.985 verify_dump=1 00:10:50.985 verify_backlog=512 00:10:50.985 verify_state_save=0 00:10:50.985 do_verify=1 00:10:50.985 verify=crc32c-intel 00:10:50.985 [job0] 00:10:50.985 filename=/dev/nvme0n1 00:10:50.985 Could not set queue depth (nvme0n1) 00:10:51.242 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.243 fio-3.35 00:10:51.243 Starting 1 thread 00:10:52.616 00:10:52.616 job0: (groupid=0, jobs=1): err= 0: pid=2909673: Tue Nov 19 21:00:26 2024 00:10:52.616 read: IOPS=20, BW=82.9KiB/s (84.9kB/s)(84.0KiB/1013msec) 00:10:52.616 slat (nsec): min=15472, max=38891, avg=23116.86, stdev=9151.40 00:10:52.616 clat (usec): min=40875, max=41115, avg=40972.76, stdev=68.05 00:10:52.616 lat (usec): min=40891, max=41135, avg=40995.88, stdev=64.98 00:10:52.616 clat percentiles (usec): 00:10:52.616 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:52.616 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:52.616 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:52.616 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:52.616 | 99.99th=[41157] 00:10:52.616 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:10:52.616 slat (usec): min=7, max=29380, avg=80.63, stdev=1297.45 00:10:52.616 clat (usec): min=162, max=344, avg=211.12, stdev=22.67 00:10:52.616 lat (usec): min=171, max=29704, avg=291.75, stdev=1302.65 00:10:52.616 clat percentiles (usec): 00:10:52.616 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 190], 20.00th=[ 196], 00:10:52.616 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 215], 00:10:52.616 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 235], 95.00th=[ 245], 00:10:52.616 | 99.00th=[ 302], 99.50th=[ 322], 99.90th=[ 347], 99.95th=[ 347], 00:10:52.616 | 99.99th=[ 347] 00:10:52.616 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:52.617 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:52.617 lat (usec) : 250=92.12%, 500=3.94% 00:10:52.617 lat (msec) : 50=3.94% 00:10:52.617 cpu : usr=0.99%, sys=1.19%, ctx=535, majf=0, minf=1 00:10:52.617 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.617 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.617 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.617 00:10:52.617 Run status group 0 (all jobs): 00:10:52.617 READ: bw=82.9KiB/s (84.9kB/s), 82.9KiB/s-82.9KiB/s (84.9kB/s-84.9kB/s), io=84.0KiB (86.0kB), run=1013-1013msec 00:10:52.617 WRITE: bw=2022KiB/s (2070kB/s), 2022KiB/s-2022KiB/s (2070kB/s-2070kB/s), io=2048KiB (2097kB), run=1013-1013msec 00:10:52.617 00:10:52.617 Disk stats (read/write): 00:10:52.617 nvme0n1: ios=44/512, merge=0/0, ticks=1723/111, in_queue=1834, util=98.70% 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
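Note: the fio-wrapper call above just generates an ordinary fio job file from its arguments and runs it; everything printed between [global] and "Starting 1 thread" is that file. A standalone equivalent that can be rerun by hand (nmic.fio is an arbitrary name, and the connected namespace must be visible as /dev/nvme0n1):

    cat > nmic.fio <<'EOF'
    [global]
    ioengine=libaio
    direct=1
    thread=1
    invalidate=1
    rw=write
    bs=4096
    iodepth=1
    numjobs=1
    norandommap=0
    time_based=1
    runtime=1
    do_verify=1
    verify=crc32c-intel
    verify_dump=1
    verify_backlog=512
    verify_state_save=0

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio nmic.fio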
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:52.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:52.617 rmmod nvme_tcp 00:10:52.617 rmmod nvme_fabrics 00:10:52.617 rmmod nvme_keyring 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2909022 ']' 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2909022 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2909022 ']' 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2909022 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.617 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2909022 00:10:52.876 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.876 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.876 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2909022' 00:10:52.876 killing process with pid 2909022 00:10:52.876 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2909022 00:10:52.876 21:00:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
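Note: teardown mirrors the setup: disconnect the initiator, unload the host NVMe/TCP modules, stop the target, then undo the firewall and address changes. Condensed from the nvmftestfini trace (the namespace removal itself happens inside _remove_spdk_ns, whose body the log does not show):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # drops both the 4420 and 4421 paths
    modprobe -r nvme-tcp                                 # also pulls out nvme_fabrics/nvme_keyring in this run
    modprobe -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                   # nvmf_tgt pid, 2909022 here
    iptables-save | grep -v SPDK_NVMF | iptables-restore # strip only the rules tagged SPDK_NVMF
    ip -4 addr flush cvl_0_1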
common/autotest_common.sh@978 -- # wait 2909022 00:10:54.251 21:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:54.251 21:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:54.251 21:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:54.251 21:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:54.251 21:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:54.251 21:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:54.251 21:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:54.251 21:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:54.251 21:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:54.251 21:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.251 21:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.251 21:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.171 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:56.171 00:10:56.171 real 0m12.172s 00:10:56.171 user 0m29.147s 00:10:56.171 sys 0m2.686s 00:10:56.171 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.171 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:56.171 ************************************ 00:10:56.171 END TEST nvmf_nmic 00:10:56.171 ************************************ 00:10:56.171 21:00:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:56.171 21:00:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:56.171 21:00:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.171 21:00:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:56.171 ************************************ 00:10:56.171 START TEST nvmf_fio_target 00:10:56.171 ************************************ 00:10:56.171 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:56.171 * Looking for test storage... 
00:10:56.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.171 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:56.171 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:56.171 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:56.171 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:56.171 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:56.171 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:56.171 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:56.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.172 --rc genhtml_branch_coverage=1 00:10:56.172 --rc genhtml_function_coverage=1 00:10:56.172 --rc genhtml_legend=1 00:10:56.172 --rc geninfo_all_blocks=1 00:10:56.172 --rc geninfo_unexecuted_blocks=1 00:10:56.172 00:10:56.172 ' 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:56.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.172 --rc genhtml_branch_coverage=1 00:10:56.172 --rc genhtml_function_coverage=1 00:10:56.172 --rc genhtml_legend=1 00:10:56.172 --rc geninfo_all_blocks=1 00:10:56.172 --rc geninfo_unexecuted_blocks=1 00:10:56.172 00:10:56.172 ' 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:56.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.172 --rc genhtml_branch_coverage=1 00:10:56.172 --rc genhtml_function_coverage=1 00:10:56.172 --rc genhtml_legend=1 00:10:56.172 --rc geninfo_all_blocks=1 00:10:56.172 --rc geninfo_unexecuted_blocks=1 00:10:56.172 00:10:56.172 ' 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:56.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.172 --rc genhtml_branch_coverage=1 00:10:56.172 --rc genhtml_function_coverage=1 00:10:56.172 --rc genhtml_legend=1 00:10:56.172 --rc geninfo_all_blocks=1 00:10:56.172 --rc geninfo_unexecuted_blocks=1 00:10:56.172 00:10:56.172 ' 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:56.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:56.172 21:00:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:56.172 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.173 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.173 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.173 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:56.173 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:56.173 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:56.173 21:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.707 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:58.707 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:58.707 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:58.707 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:58.707 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:58.707 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:58.707 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:58.707 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:58.707 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:58.707 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:58.707 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:58.707 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:58.707 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:58.707 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:58.707 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:58.707 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:58.707 21:00:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:58.707 21:00:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:58.707 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:58.707 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.707 21:00:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:58.707 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:58.707 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:58.707 21:00:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:58.707 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:58.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:10:58.708 00:10:58.708 --- 10.0.0.2 ping statistics --- 00:10:58.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.708 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:58.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:58.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:10:58.708 00:10:58.708 --- 10.0.0.1 ping statistics --- 00:10:58.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.708 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2912008 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2912008 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2912008 ']' 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.708 21:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.708 [2024-11-19 21:00:32.395025] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
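The nvmf_tcp_init steps traced at nvmf/common.sh@250-291 above condense to roughly the following; a consolidated sketch assuming the same cvl_0_* port names and 10.0.0.0/24 addressing, with the `ip -4 addr flush` calls and the iptables comment wrapper omitted:

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                          # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP stays on the host stack
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP inside the namespace
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                       # host -> namespace reachability
  ip netns exec "$NS" ping -c 1 10.0.0.1                   # namespace -> host reachability

Isolating the target-side port in cvl_0_0_ns_spdk while the initiator keeps the host stack presumably relies on the two E810 ports being linked back-to-back; the successful pings to 10.0.0.2 and 10.0.0.1 above verify that path before nvmf_tgt is launched inside the namespace.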
00:10:58.708 [2024-11-19 21:00:32.395212] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.965 [2024-11-19 21:00:32.546101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.965 [2024-11-19 21:00:32.682932] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.965 [2024-11-19 21:00:32.683012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:58.965 [2024-11-19 21:00:32.683038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.965 [2024-11-19 21:00:32.683063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.965 [2024-11-19 21:00:32.683094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:58.965 [2024-11-19 21:00:32.685861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.965 [2024-11-19 21:00:32.685931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.965 [2024-11-19 21:00:32.686030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.965 [2024-11-19 21:00:32.686035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.899 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.899 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:59.899 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:59.899 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:59.899 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.899 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.899 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:00.157 [2024-11-19 21:00:33.698454] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.157 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.415 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:00.415 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.672 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:00.672 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:01.238 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:01.238 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:01.495 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:01.495 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:01.753 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:02.011 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:02.011 21:00:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:02.649 21:00:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:02.649 21:00:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:02.649 21:00:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:02.649 21:00:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:02.933 21:00:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:03.190 21:00:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:03.190 21:00:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:03.448 21:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:03.448 21:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:03.706 21:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:03.964 [2024-11-19 21:00:37.756380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.223 21:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:04.481 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:04.739 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:05.306 21:00:38 
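Everything target/fio.sh provisions before the first fio run is visible in the rpc.py calls traced above (fio.sh@19-46). Condensed into one sketch -- the NQN, serial, malloc sizes, RAID layouts and listener address are the logged values, while $RPC and the creation loop are illustrative shorthand:

  RPC="$SPDK_DIR/scripts/rpc.py"        # assumes $SPDK_DIR points at the spdk checkout used above
  $RPC nvmf_create_transport -t tcp -o -u 8192
  for i in 0 1 2 3 4 5 6; do $RPC bdev_malloc_create 64 512; done    # yields Malloc0..Malloc6
  $RPC bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  $RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # initiator side; hostnqn/hostid as logged

After the nvme connect, the four namespaces (Malloc0, Malloc1, raid0, concat0) appear as /dev/nvme0n1 through /dev/nvme0n4, which is what waitforserial counts below via `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` before the fio jobs start.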
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:05.306 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:05.306 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.306 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:05.306 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:05.306 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:07.203 21:00:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:07.203 21:00:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:07.203 21:00:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:07.203 21:00:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:07.203 21:00:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.203 21:00:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:07.203 21:00:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:07.203 [global] 00:11:07.203 thread=1 00:11:07.203 invalidate=1 00:11:07.203 rw=write 00:11:07.203 time_based=1 00:11:07.203 runtime=1 00:11:07.203 ioengine=libaio 00:11:07.203 direct=1 00:11:07.203 bs=4096 00:11:07.203 iodepth=1 00:11:07.203 norandommap=0 00:11:07.203 numjobs=1 00:11:07.203 00:11:07.203 verify_dump=1 00:11:07.203 verify_backlog=512 00:11:07.203 verify_state_save=0 00:11:07.203 do_verify=1 00:11:07.203 verify=crc32c-intel 00:11:07.203 [job0] 00:11:07.203 filename=/dev/nvme0n1 00:11:07.203 [job1] 00:11:07.203 filename=/dev/nvme0n2 00:11:07.203 [job2] 00:11:07.203 filename=/dev/nvme0n3 00:11:07.203 [job3] 00:11:07.203 filename=/dev/nvme0n4 00:11:07.461 Could not set queue depth (nvme0n1) 00:11:07.461 Could not set queue depth (nvme0n2) 00:11:07.461 Could not set queue depth (nvme0n3) 00:11:07.461 Could not set queue depth (nvme0n4) 00:11:07.461 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.461 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.461 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.461 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.461 fio-3.35 00:11:07.461 Starting 4 threads 00:11:08.836 00:11:08.836 job0: (groupid=0, jobs=1): err= 0: pid=2913225: Tue Nov 19 21:00:42 2024 00:11:08.836 read: IOPS=20, BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec) 00:11:08.836 slat (nsec): min=14409, max=42541, avg=24952.67, stdev=10811.95 00:11:08.836 clat (usec): min=365, max=41622, avg=37133.48, stdev=12176.84 00:11:08.836 lat (usec): min=406, max=41645, avg=37158.44, stdev=12172.66 00:11:08.836 clat percentiles (usec): 00:11:08.836 | 1.00th=[ 367], 5.00th=[ 660], 10.00th=[40633], 
20.00th=[40633], 00:11:08.836 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:08.836 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:08.836 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:08.836 | 99.99th=[41681] 00:11:08.836 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:08.836 slat (nsec): min=10194, max=79140, avg=25576.55, stdev=12741.44 00:11:08.836 clat (usec): min=228, max=598, avg=397.92, stdev=74.97 00:11:08.836 lat (usec): min=246, max=634, avg=423.50, stdev=75.41 00:11:08.836 clat percentiles (usec): 00:11:08.836 | 1.00th=[ 231], 5.00th=[ 251], 10.00th=[ 281], 20.00th=[ 351], 00:11:08.836 | 30.00th=[ 371], 40.00th=[ 388], 50.00th=[ 404], 60.00th=[ 416], 00:11:08.836 | 70.00th=[ 441], 80.00th=[ 461], 90.00th=[ 490], 95.00th=[ 510], 00:11:08.836 | 99.00th=[ 578], 99.50th=[ 586], 99.90th=[ 603], 99.95th=[ 603], 00:11:08.836 | 99.99th=[ 603] 00:11:08.836 bw ( KiB/s): min= 4096, max= 4096, per=22.69%, avg=4096.00, stdev= 0.00, samples=1 00:11:08.836 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:08.836 lat (usec) : 250=4.69%, 500=85.37%, 750=6.38% 00:11:08.836 lat (msec) : 50=3.56% 00:11:08.836 cpu : usr=1.20%, sys=1.30%, ctx=534, majf=0, minf=1 00:11:08.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.836 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.836 job1: (groupid=0, jobs=1): err= 0: pid=2913228: Tue Nov 19 21:00:42 2024 00:11:08.836 read: IOPS=503, BW=2013KiB/s (2062kB/s)(2096KiB/1041msec) 00:11:08.836 slat (nsec): min=7020, max=38384, avg=11928.51, stdev=5803.57 00:11:08.836 clat (usec): min=246, max=41024, avg=1296.17, stdev=5808.40 00:11:08.836 lat (usec): min=255, max=41042, avg=1308.10, stdev=5810.08 00:11:08.836 clat percentiles (usec): 00:11:08.836 | 1.00th=[ 255], 5.00th=[ 277], 10.00th=[ 297], 20.00th=[ 371], 00:11:08.836 | 30.00th=[ 383], 40.00th=[ 396], 50.00th=[ 408], 60.00th=[ 449], 00:11:08.836 | 70.00th=[ 519], 80.00th=[ 578], 90.00th=[ 635], 95.00th=[ 660], 00:11:08.836 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:08.836 | 99.99th=[41157] 00:11:08.836 write: IOPS=983, BW=3935KiB/s (4029kB/s)(4096KiB/1041msec); 0 zone resets 00:11:08.836 slat (nsec): min=7226, max=71080, avg=17176.75, stdev=11399.53 00:11:08.836 clat (usec): min=165, max=682, avg=322.84, stdev=121.76 00:11:08.836 lat (usec): min=174, max=720, avg=340.02, stdev=127.01 00:11:08.836 clat percentiles (usec): 00:11:08.836 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 198], 00:11:08.836 | 30.00th=[ 212], 40.00th=[ 260], 50.00th=[ 302], 60.00th=[ 338], 00:11:08.836 | 70.00th=[ 396], 80.00th=[ 441], 90.00th=[ 510], 95.00th=[ 545], 00:11:08.836 | 99.00th=[ 611], 99.50th=[ 627], 99.90th=[ 652], 99.95th=[ 685], 00:11:08.836 | 99.99th=[ 685] 00:11:08.836 bw ( KiB/s): min= 3200, max= 4992, per=22.69%, avg=4096.00, stdev=1267.14, samples=2 00:11:08.836 iops : min= 800, max= 1248, avg=1024.00, stdev=316.78, samples=2 00:11:08.836 lat (usec) : 250=25.65%, 500=56.33%, 750=17.25%, 1000=0.06% 00:11:08.836 lat (msec) : 50=0.71% 00:11:08.836 cpu : usr=1.83%, sys=2.79%, ctx=1550, majf=0, minf=1 00:11:08.836 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.836 issued rwts: total=524,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.836 job2: (groupid=0, jobs=1): err= 0: pid=2913229: Tue Nov 19 21:00:42 2024 00:11:08.836 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:08.836 slat (nsec): min=5119, max=54991, avg=12533.11, stdev=6648.39 00:11:08.836 clat (usec): min=227, max=41183, avg=385.07, stdev=2077.00 00:11:08.836 lat (usec): min=234, max=41197, avg=397.60, stdev=2077.40 00:11:08.836 clat percentiles (usec): 00:11:08.836 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 258], 00:11:08.836 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:11:08.836 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 347], 00:11:08.836 | 99.00th=[ 498], 99.50th=[ 529], 99.90th=[41157], 99.95th=[41157], 00:11:08.836 | 99.99th=[41157] 00:11:08.836 write: IOPS=1625, BW=6501KiB/s (6658kB/s)(6508KiB/1001msec); 0 zone resets 00:11:08.836 slat (nsec): min=6926, max=49097, avg=15109.20, stdev=6412.81 00:11:08.836 clat (usec): min=178, max=1116, avg=217.23, stdev=39.09 00:11:08.836 lat (usec): min=187, max=1127, avg=232.34, stdev=39.38 00:11:08.836 clat percentiles (usec): 00:11:08.836 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 202], 00:11:08.836 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 212], 60.00th=[ 217], 00:11:08.836 | 70.00th=[ 221], 80.00th=[ 225], 90.00th=[ 233], 95.00th=[ 243], 00:11:08.836 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 848], 99.95th=[ 1123], 00:11:08.836 | 99.99th=[ 1123] 00:11:08.836 bw ( KiB/s): min= 6320, max= 6320, per=35.00%, avg=6320.00, stdev= 0.00, samples=1 00:11:08.836 iops : min= 1580, max= 1580, avg=1580.00, stdev= 0.00, samples=1 00:11:08.836 lat (usec) : 250=54.79%, 500=44.67%, 750=0.32%, 1000=0.06% 00:11:08.836 lat (msec) : 2=0.03%, 50=0.13% 00:11:08.836 cpu : usr=2.90%, sys=4.30%, ctx=3165, majf=0, minf=1 00:11:08.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.836 issued rwts: total=1536,1627,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.836 job3: (groupid=0, jobs=1): err= 0: pid=2913230: Tue Nov 19 21:00:42 2024 00:11:08.836 read: IOPS=1099, BW=4400KiB/s (4505kB/s)(4404KiB/1001msec) 00:11:08.836 slat (nsec): min=6166, max=69573, avg=15734.15, stdev=7563.81 00:11:08.836 clat (usec): min=259, max=981, avg=398.42, stdev=74.43 00:11:08.836 lat (usec): min=268, max=1013, avg=414.16, stdev=76.38 00:11:08.836 clat percentiles (usec): 00:11:08.836 | 1.00th=[ 269], 5.00th=[ 289], 10.00th=[ 306], 20.00th=[ 359], 00:11:08.836 | 30.00th=[ 375], 40.00th=[ 383], 50.00th=[ 392], 60.00th=[ 400], 00:11:08.836 | 70.00th=[ 408], 80.00th=[ 429], 90.00th=[ 498], 95.00th=[ 545], 00:11:08.836 | 99.00th=[ 644], 99.50th=[ 676], 99.90th=[ 742], 99.95th=[ 979], 00:11:08.836 | 99.99th=[ 979] 00:11:08.836 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:08.837 slat (nsec): min=7767, max=84388, avg=19126.08, stdev=11000.60 00:11:08.837 clat (usec): min=191, max=635, avg=327.19, stdev=99.90 
00:11:08.837 lat (usec): min=199, max=678, avg=346.31, stdev=105.56 00:11:08.837 clat percentiles (usec): 00:11:08.837 | 1.00th=[ 204], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 235], 00:11:08.837 | 30.00th=[ 245], 40.00th=[ 262], 50.00th=[ 302], 60.00th=[ 343], 00:11:08.837 | 70.00th=[ 388], 80.00th=[ 416], 90.00th=[ 474], 95.00th=[ 515], 00:11:08.837 | 99.00th=[ 586], 99.50th=[ 611], 99.90th=[ 627], 99.95th=[ 635], 00:11:08.837 | 99.99th=[ 635] 00:11:08.837 bw ( KiB/s): min= 5368, max= 5368, per=29.73%, avg=5368.00, stdev= 0.00, samples=1 00:11:08.837 iops : min= 1342, max= 1342, avg=1342.00, stdev= 0.00, samples=1 00:11:08.837 lat (usec) : 250=19.11%, 500=72.89%, 750=7.96%, 1000=0.04% 00:11:08.837 cpu : usr=3.40%, sys=6.00%, ctx=2638, majf=0, minf=2 00:11:08.837 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.837 issued rwts: total=1101,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.837 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.837 00:11:08.837 Run status group 0 (all jobs): 00:11:08.837 READ: bw=11.9MiB/s (12.5MB/s), 83.9KiB/s-6138KiB/s (85.9kB/s-6285kB/s), io=12.4MiB (13.0MB), run=1001-1041msec 00:11:08.837 WRITE: bw=17.6MiB/s (18.5MB/s), 2046KiB/s-6501KiB/s (2095kB/s-6658kB/s), io=18.4MiB (19.2MB), run=1001-1041msec 00:11:08.837 00:11:08.837 Disk stats (read/write): 00:11:08.837 nvme0n1: ios=66/512, merge=0/0, ticks=1187/198, in_queue=1385, util=85.97% 00:11:08.837 nvme0n2: ios=569/1024, merge=0/0, ticks=536/311, in_queue=847, util=91.36% 00:11:08.837 nvme0n3: ios=1169/1536, merge=0/0, ticks=570/323, in_queue=893, util=95.10% 00:11:08.837 nvme0n4: ios=1081/1149, merge=0/0, ticks=1217/374, in_queue=1591, util=94.33% 00:11:08.837 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:08.837 [global] 00:11:08.837 thread=1 00:11:08.837 invalidate=1 00:11:08.837 rw=randwrite 00:11:08.837 time_based=1 00:11:08.837 runtime=1 00:11:08.837 ioengine=libaio 00:11:08.837 direct=1 00:11:08.837 bs=4096 00:11:08.837 iodepth=1 00:11:08.837 norandommap=0 00:11:08.837 numjobs=1 00:11:08.837 00:11:08.837 verify_dump=1 00:11:08.837 verify_backlog=512 00:11:08.837 verify_state_save=0 00:11:08.837 do_verify=1 00:11:08.837 verify=crc32c-intel 00:11:08.837 [job0] 00:11:08.837 filename=/dev/nvme0n1 00:11:08.837 [job1] 00:11:08.837 filename=/dev/nvme0n2 00:11:08.837 [job2] 00:11:08.837 filename=/dev/nvme0n3 00:11:08.837 [job3] 00:11:08.837 filename=/dev/nvme0n4 00:11:08.837 Could not set queue depth (nvme0n1) 00:11:08.837 Could not set queue depth (nvme0n2) 00:11:08.837 Could not set queue depth (nvme0n3) 00:11:08.837 Could not set queue depth (nvme0n4) 00:11:09.095 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.095 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.095 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.095 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.095 fio-3.35 00:11:09.095 Starting 4 threads 00:11:10.487 00:11:10.487 job0: (groupid=0, jobs=1): err= 0: 
pid=2913458: Tue Nov 19 21:00:43 2024 00:11:10.487 read: IOPS=20, BW=83.7KiB/s (85.8kB/s)(84.0KiB/1003msec) 00:11:10.487 slat (nsec): min=7441, max=35346, avg=17814.52, stdev=6600.72 00:11:10.487 clat (usec): min=40879, max=41076, avg=40973.90, stdev=51.73 00:11:10.487 lat (usec): min=40914, max=41095, avg=40991.72, stdev=49.12 00:11:10.487 clat percentiles (usec): 00:11:10.487 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:10.487 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:10.487 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:10.487 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:10.487 | 99.99th=[41157] 00:11:10.487 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:11:10.487 slat (nsec): min=7567, max=52043, avg=16297.24, stdev=8160.02 00:11:10.487 clat (usec): min=178, max=614, avg=257.00, stdev=90.04 00:11:10.487 lat (usec): min=190, max=623, avg=273.29, stdev=90.42 00:11:10.487 clat percentiles (usec): 00:11:10.487 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 210], 00:11:10.487 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 231], 00:11:10.487 | 70.00th=[ 239], 80.00th=[ 251], 90.00th=[ 408], 95.00th=[ 502], 00:11:10.487 | 99.00th=[ 562], 99.50th=[ 578], 99.90th=[ 611], 99.95th=[ 611], 00:11:10.487 | 99.99th=[ 611] 00:11:10.487 bw ( KiB/s): min= 4096, max= 4096, per=29.69%, avg=4096.00, stdev= 0.00, samples=1 00:11:10.487 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:10.487 lat (usec) : 250=76.36%, 500=14.45%, 750=5.25% 00:11:10.487 lat (msec) : 50=3.94% 00:11:10.487 cpu : usr=0.30%, sys=1.30%, ctx=536, majf=0, minf=1 00:11:10.487 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.487 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.487 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.487 job1: (groupid=0, jobs=1): err= 0: pid=2913459: Tue Nov 19 21:00:43 2024 00:11:10.487 read: IOPS=21, BW=84.7KiB/s (86.7kB/s)(88.0KiB/1039msec) 00:11:10.487 slat (nsec): min=12667, max=34531, avg=18181.00, stdev=5843.14 00:11:10.487 clat (usec): min=40833, max=41006, avg=40968.37, stdev=39.00 00:11:10.487 lat (usec): min=40855, max=41024, avg=40986.55, stdev=36.06 00:11:10.487 clat percentiles (usec): 00:11:10.487 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:10.487 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:10.487 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:10.487 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:10.487 | 99.99th=[41157] 00:11:10.487 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:11:10.487 slat (nsec): min=7342, max=66328, avg=14527.96, stdev=7457.99 00:11:10.487 clat (usec): min=196, max=560, avg=249.52, stdev=41.01 00:11:10.487 lat (usec): min=206, max=588, avg=264.05, stdev=43.83 00:11:10.487 clat percentiles (usec): 00:11:10.487 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 227], 00:11:10.487 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:11:10.487 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 281], 95.00th=[ 310], 00:11:10.487 | 99.00th=[ 433], 99.50th=[ 498], 99.90th=[ 
562], 99.95th=[ 562], 00:11:10.487 | 99.99th=[ 562] 00:11:10.487 bw ( KiB/s): min= 4096, max= 4096, per=29.69%, avg=4096.00, stdev= 0.00, samples=1 00:11:10.487 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:10.487 lat (usec) : 250=65.36%, 500=30.15%, 750=0.37% 00:11:10.487 lat (msec) : 50=4.12% 00:11:10.487 cpu : usr=0.58%, sys=0.87%, ctx=534, majf=0, minf=2 00:11:10.487 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.487 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.487 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.487 job2: (groupid=0, jobs=1): err= 0: pid=2913460: Tue Nov 19 21:00:43 2024 00:11:10.487 read: IOPS=1547, BW=6191KiB/s (6339kB/s)(6432KiB/1039msec) 00:11:10.487 slat (nsec): min=4283, max=69106, avg=14902.57, stdev=10397.70 00:11:10.487 clat (usec): min=217, max=40398, avg=303.37, stdev=1002.04 00:11:10.487 lat (usec): min=222, max=40416, avg=318.27, stdev=1002.43 00:11:10.487 clat percentiles (usec): 00:11:10.487 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:11:10.487 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 273], 00:11:10.487 | 70.00th=[ 285], 80.00th=[ 306], 90.00th=[ 334], 95.00th=[ 396], 00:11:10.487 | 99.00th=[ 494], 99.50th=[ 523], 99.90th=[ 553], 99.95th=[40633], 00:11:10.487 | 99.99th=[40633] 00:11:10.487 write: IOPS=1971, BW=7885KiB/s (8074kB/s)(8192KiB/1039msec); 0 zone resets 00:11:10.487 slat (nsec): min=5617, max=76483, avg=14137.08, stdev=9355.54 00:11:10.487 clat (usec): min=167, max=668, avg=236.32, stdev=93.12 00:11:10.487 lat (usec): min=173, max=685, avg=250.46, stdev=96.76 00:11:10.487 clat percentiles (usec): 00:11:10.487 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 182], 00:11:10.487 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 200], 00:11:10.487 | 70.00th=[ 217], 80.00th=[ 277], 90.00th=[ 388], 95.00th=[ 453], 00:11:10.487 | 99.00th=[ 570], 99.50th=[ 611], 99.90th=[ 635], 99.95th=[ 652], 00:11:10.487 | 99.99th=[ 668] 00:11:10.487 bw ( KiB/s): min= 8192, max= 8192, per=59.37%, avg=8192.00, stdev= 0.00, samples=2 00:11:10.487 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:11:10.487 lat (usec) : 250=58.32%, 500=39.93%, 750=1.72% 00:11:10.487 lat (msec) : 50=0.03% 00:11:10.487 cpu : usr=2.41%, sys=5.49%, ctx=3657, majf=0, minf=1 00:11:10.487 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.487 issued rwts: total=1608,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.487 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.487 job3: (groupid=0, jobs=1): err= 0: pid=2913461: Tue Nov 19 21:00:43 2024 00:11:10.487 read: IOPS=19, BW=78.1KiB/s (80.0kB/s)(80.0KiB/1024msec) 00:11:10.487 slat (nsec): min=8452, max=37448, avg=23006.90, stdev=9957.11 00:11:10.487 clat (usec): min=40879, max=42097, avg=41384.85, stdev=494.87 00:11:10.487 lat (usec): min=40916, max=42116, avg=41407.86, stdev=495.04 00:11:10.487 clat percentiles (usec): 00:11:10.487 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:10.487 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 
00:11:10.487 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:10.487 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:10.487 | 99.99th=[42206] 00:11:10.487 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:11:10.487 slat (nsec): min=7836, max=68886, avg=19884.31, stdev=11589.02 00:11:10.487 clat (usec): min=213, max=620, avg=357.19, stdev=75.60 00:11:10.487 lat (usec): min=222, max=637, avg=377.08, stdev=73.95 00:11:10.487 clat percentiles (usec): 00:11:10.487 | 1.00th=[ 229], 5.00th=[ 251], 10.00th=[ 269], 20.00th=[ 285], 00:11:10.487 | 30.00th=[ 302], 40.00th=[ 322], 50.00th=[ 347], 60.00th=[ 392], 00:11:10.487 | 70.00th=[ 404], 80.00th=[ 424], 90.00th=[ 453], 95.00th=[ 478], 00:11:10.487 | 99.00th=[ 537], 99.50th=[ 578], 99.90th=[ 619], 99.95th=[ 619], 00:11:10.487 | 99.99th=[ 619] 00:11:10.488 bw ( KiB/s): min= 4096, max= 4096, per=29.69%, avg=4096.00, stdev= 0.00, samples=1 00:11:10.488 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:10.488 lat (usec) : 250=4.32%, 500=88.35%, 750=3.57% 00:11:10.488 lat (msec) : 50=3.76% 00:11:10.488 cpu : usr=0.59%, sys=1.37%, ctx=534, majf=0, minf=1 00:11:10.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.488 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.488 00:11:10.488 Run status group 0 (all jobs): 00:11:10.488 READ: bw=6433KiB/s (6588kB/s), 78.1KiB/s-6191KiB/s (80.0kB/s-6339kB/s), io=6684KiB (6844kB), run=1003-1039msec 00:11:10.488 WRITE: bw=13.5MiB/s (14.1MB/s), 1971KiB/s-7885KiB/s (2018kB/s-8074kB/s), io=14.0MiB (14.7MB), run=1003-1039msec 00:11:10.488 00:11:10.488 Disk stats (read/write): 00:11:10.488 nvme0n1: ios=56/512, merge=0/0, ticks=1296/120, in_queue=1416, util=96.69% 00:11:10.488 nvme0n2: ios=30/512, merge=0/0, ticks=710/122, in_queue=832, util=86.90% 00:11:10.488 nvme0n3: ios=1542/1536, merge=0/0, ticks=1037/366, in_queue=1403, util=98.12% 00:11:10.488 nvme0n4: ios=60/512, merge=0/0, ticks=781/170, in_queue=951, util=96.43% 00:11:10.488 21:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:10.488 [global] 00:11:10.488 thread=1 00:11:10.488 invalidate=1 00:11:10.488 rw=write 00:11:10.488 time_based=1 00:11:10.488 runtime=1 00:11:10.488 ioengine=libaio 00:11:10.488 direct=1 00:11:10.488 bs=4096 00:11:10.488 iodepth=128 00:11:10.488 norandommap=0 00:11:10.488 numjobs=1 00:11:10.488 00:11:10.488 verify_dump=1 00:11:10.488 verify_backlog=512 00:11:10.488 verify_state_save=0 00:11:10.488 do_verify=1 00:11:10.488 verify=crc32c-intel 00:11:10.488 [job0] 00:11:10.488 filename=/dev/nvme0n1 00:11:10.488 [job1] 00:11:10.488 filename=/dev/nvme0n2 00:11:10.488 [job2] 00:11:10.488 filename=/dev/nvme0n3 00:11:10.488 [job3] 00:11:10.488 filename=/dev/nvme0n4 00:11:10.488 Could not set queue depth (nvme0n1) 00:11:10.488 Could not set queue depth (nvme0n2) 00:11:10.488 Could not set queue depth (nvme0n3) 00:11:10.488 Could not set queue depth (nvme0n4) 00:11:10.488 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.488 job1: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.488 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.488 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.488 fio-3.35 00:11:10.488 Starting 4 threads 00:11:11.867 00:11:11.867 job0: (groupid=0, jobs=1): err= 0: pid=2913692: Tue Nov 19 21:00:45 2024 00:11:11.867 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:11:11.867 slat (usec): min=2, max=34691, avg=153.94, stdev=1242.83 00:11:11.867 clat (usec): min=8273, max=81280, avg=19970.54, stdev=12203.99 00:11:11.867 lat (usec): min=8299, max=81314, avg=20124.48, stdev=12311.88 00:11:11.867 clat percentiles (usec): 00:11:11.867 | 1.00th=[ 9241], 5.00th=[11207], 10.00th=[12518], 20.00th=[12780], 00:11:11.867 | 30.00th=[13173], 40.00th=[13566], 50.00th=[14353], 60.00th=[16319], 00:11:11.867 | 70.00th=[19530], 80.00th=[23725], 90.00th=[42730], 95.00th=[51643], 00:11:11.867 | 99.00th=[58459], 99.50th=[58459], 99.90th=[58983], 99.95th=[73925], 00:11:11.867 | 99.99th=[81265] 00:11:11.867 write: IOPS=3357, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1006msec); 0 zone resets 00:11:11.867 slat (usec): min=3, max=43774, avg=145.94, stdev=1236.37 00:11:11.867 clat (usec): min=900, max=110813, avg=19483.96, stdev=15371.68 00:11:11.867 lat (msec): min=7, max=110, avg=19.63, stdev=15.47 00:11:11.867 clat percentiles (msec): 00:11:11.867 | 1.00th=[ 9], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 13], 00:11:11.867 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 15], 00:11:11.867 | 70.00th=[ 17], 80.00th=[ 19], 90.00th=[ 36], 95.00th=[ 46], 00:11:11.867 | 99.00th=[ 101], 99.50th=[ 101], 99.90th=[ 101], 99.95th=[ 101], 00:11:11.867 | 99.99th=[ 111] 00:11:11.867 bw ( KiB/s): min=12208, max=13792, per=25.76%, avg=13000.00, stdev=1120.06, samples=2 00:11:11.867 iops : min= 3052, max= 3448, avg=3250.00, stdev=280.01, samples=2 00:11:11.867 lat (usec) : 1000=0.02% 00:11:11.867 lat (msec) : 10=2.56%, 20=74.65%, 50=17.86%, 100=4.25%, 250=0.67% 00:11:11.867 cpu : usr=4.88%, sys=7.56%, ctx=314, majf=0, minf=2 00:11:11.867 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:11.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.867 issued rwts: total=3072,3378,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.867 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.867 job1: (groupid=0, jobs=1): err= 0: pid=2913693: Tue Nov 19 21:00:45 2024 00:11:11.867 read: IOPS=2589, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1005msec) 00:11:11.867 slat (usec): min=2, max=56149, avg=173.76, stdev=1539.08 00:11:11.867 clat (usec): min=1985, max=106330, avg=22015.81, stdev=16184.83 00:11:11.867 lat (msec): min=4, max=106, avg=22.19, stdev=16.31 00:11:11.867 clat percentiles (msec): 00:11:11.867 | 1.00th=[ 8], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 14], 00:11:11.867 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 17], 00:11:11.867 | 70.00th=[ 21], 80.00th=[ 27], 90.00th=[ 42], 95.00th=[ 52], 00:11:11.867 | 99.00th=[ 79], 99.50th=[ 87], 99.90th=[ 90], 99.95th=[ 90], 00:11:11.867 | 99.99th=[ 107] 00:11:11.867 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:11:11.867 slat (usec): min=3, max=35204, avg=168.80, stdev=1465.56 00:11:11.867 clat (usec): min=5823, max=80796, avg=22108.45, 
stdev=13886.88 00:11:11.867 lat (usec): min=5880, max=80813, avg=22277.25, stdev=14010.67 00:11:11.867 clat percentiles (usec): 00:11:11.867 | 1.00th=[ 9896], 5.00th=[11076], 10.00th=[12256], 20.00th=[13566], 00:11:11.867 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14746], 60.00th=[15139], 00:11:11.867 | 70.00th=[22152], 80.00th=[32113], 90.00th=[44303], 95.00th=[51643], 00:11:11.867 | 99.00th=[67634], 99.50th=[67634], 99.90th=[68682], 99.95th=[73925], 00:11:11.867 | 99.99th=[81265] 00:11:11.867 bw ( KiB/s): min=11600, max=12288, per=23.67%, avg=11944.00, stdev=486.49, samples=2 00:11:11.867 iops : min= 2900, max= 3072, avg=2986.00, stdev=121.62, samples=2 00:11:11.867 lat (msec) : 2=0.02%, 10=1.76%, 20=67.11%, 50=23.72%, 100=7.37% 00:11:11.867 lat (msec) : 250=0.02% 00:11:11.867 cpu : usr=3.78%, sys=6.67%, ctx=280, majf=0, minf=1 00:11:11.867 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:11.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.867 issued rwts: total=2602,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.867 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.867 job2: (groupid=0, jobs=1): err= 0: pid=2913696: Tue Nov 19 21:00:45 2024 00:11:11.868 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec) 00:11:11.868 slat (usec): min=2, max=19225, avg=149.64, stdev=1154.97 00:11:11.868 clat (usec): min=5893, max=48220, avg=20230.61, stdev=6523.48 00:11:11.868 lat (usec): min=5912, max=48227, avg=20380.25, stdev=6608.43 00:11:11.868 clat percentiles (usec): 00:11:11.868 | 1.00th=[ 7898], 5.00th=[14091], 10.00th=[14615], 20.00th=[15533], 00:11:11.868 | 30.00th=[16319], 40.00th=[16909], 50.00th=[17695], 60.00th=[19006], 00:11:11.868 | 70.00th=[21627], 80.00th=[25822], 90.00th=[30016], 95.00th=[31589], 00:11:11.868 | 99.00th=[42206], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:11:11.868 | 99.99th=[47973] 00:11:11.868 write: IOPS=3287, BW=12.8MiB/s (13.5MB/s)(13.0MiB/1014msec); 0 zone resets 00:11:11.868 slat (usec): min=4, max=23762, avg=148.51, stdev=1051.62 00:11:11.868 clat (usec): min=6779, max=47495, avg=19938.07, stdev=7555.47 00:11:11.868 lat (usec): min=6795, max=47505, avg=20086.58, stdev=7645.60 00:11:11.868 clat percentiles (usec): 00:11:11.868 | 1.00th=[ 8455], 5.00th=[10159], 10.00th=[11994], 20.00th=[15139], 00:11:11.868 | 30.00th=[16057], 40.00th=[16450], 50.00th=[17171], 60.00th=[19268], 00:11:11.868 | 70.00th=[21890], 80.00th=[25297], 90.00th=[33162], 95.00th=[33817], 00:11:11.868 | 99.00th=[43254], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:11:11.868 | 99.99th=[47449] 00:11:11.868 bw ( KiB/s): min=12288, max=13368, per=25.42%, avg=12828.00, stdev=763.68, samples=2 00:11:11.868 iops : min= 3072, max= 3342, avg=3207.00, stdev=190.92, samples=2 00:11:11.868 lat (msec) : 10=3.20%, 20=60.07%, 50=36.73% 00:11:11.868 cpu : usr=5.13%, sys=6.52%, ctx=243, majf=0, minf=1 00:11:11.868 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:11.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.868 issued rwts: total=3072,3334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.868 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.868 job3: (groupid=0, jobs=1): err= 0: pid=2913697: Tue Nov 19 21:00:45 2024 00:11:11.868 read: IOPS=2970, 
BW=11.6MiB/s (12.2MB/s)(11.8MiB/1019msec) 00:11:11.868 slat (usec): min=2, max=17273, avg=160.05, stdev=1164.70 00:11:11.868 clat (usec): min=6180, max=61113, avg=23243.09, stdev=11874.87 00:11:11.868 lat (usec): min=6187, max=65758, avg=23403.14, stdev=11962.79 00:11:11.868 clat percentiles (usec): 00:11:11.868 | 1.00th=[ 8717], 5.00th=[10814], 10.00th=[12780], 20.00th=[13566], 00:11:11.868 | 30.00th=[14746], 40.00th=[17957], 50.00th=[19530], 60.00th=[21627], 00:11:11.868 | 70.00th=[25297], 80.00th=[34341], 90.00th=[42206], 95.00th=[48497], 00:11:11.868 | 99.00th=[56361], 99.50th=[57410], 99.90th=[58983], 99.95th=[58983], 00:11:11.868 | 99.99th=[61080] 00:11:11.868 write: IOPS=3014, BW=11.8MiB/s (12.3MB/s)(12.0MiB/1019msec); 0 zone resets 00:11:11.868 slat (usec): min=3, max=19257, avg=118.50, stdev=955.24 00:11:11.868 clat (usec): min=3670, max=65841, avg=18440.14, stdev=9968.57 00:11:11.868 lat (usec): min=3676, max=65850, avg=18558.64, stdev=10035.04 00:11:11.868 clat percentiles (usec): 00:11:11.868 | 1.00th=[ 7308], 5.00th=[ 9503], 10.00th=[11338], 20.00th=[13173], 00:11:11.868 | 30.00th=[13829], 40.00th=[14091], 50.00th=[15008], 60.00th=[16581], 00:11:11.868 | 70.00th=[17957], 80.00th=[20055], 90.00th=[32113], 95.00th=[39060], 00:11:11.868 | 99.00th=[62129], 99.50th=[64226], 99.90th=[65799], 99.95th=[65799], 00:11:11.868 | 99.99th=[65799] 00:11:11.868 bw ( KiB/s): min=10384, max=14192, per=24.35%, avg=12288.00, stdev=2692.66, samples=2 00:11:11.868 iops : min= 2596, max= 3548, avg=3072.00, stdev=673.17, samples=2 00:11:11.868 lat (msec) : 4=0.13%, 10=6.08%, 20=59.81%, 50=30.81%, 100=3.16% 00:11:11.868 cpu : usr=2.36%, sys=4.91%, ctx=202, majf=0, minf=1 00:11:11.868 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:11.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.868 issued rwts: total=3027,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.868 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.868 00:11:11.868 Run status group 0 (all jobs): 00:11:11.868 READ: bw=45.1MiB/s (47.3MB/s), 10.1MiB/s-11.9MiB/s (10.6MB/s-12.5MB/s), io=46.0MiB (48.2MB), run=1005-1019msec 00:11:11.868 WRITE: bw=49.3MiB/s (51.7MB/s), 11.8MiB/s-13.1MiB/s (12.3MB/s-13.8MB/s), io=50.2MiB (52.7MB), run=1005-1019msec 00:11:11.868 00:11:11.868 Disk stats (read/write): 00:11:11.868 nvme0n1: ios=2610/2951, merge=0/0, ticks=23742/24474, in_queue=48216, util=89.98% 00:11:11.868 nvme0n2: ios=2088/2184, merge=0/0, ticks=21497/22716, in_queue=44213, util=96.54% 00:11:11.868 nvme0n3: ios=2609/2599, merge=0/0, ticks=47214/42012, in_queue=89226, util=96.56% 00:11:11.868 nvme0n4: ios=2576/2916, merge=0/0, ticks=41880/43251, in_queue=85131, util=98.00% 00:11:11.868 21:00:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:11.868 [global] 00:11:11.868 thread=1 00:11:11.868 invalidate=1 00:11:11.868 rw=randwrite 00:11:11.868 time_based=1 00:11:11.868 runtime=1 00:11:11.868 ioengine=libaio 00:11:11.868 direct=1 00:11:11.868 bs=4096 00:11:11.868 iodepth=128 00:11:11.868 norandommap=0 00:11:11.868 numjobs=1 00:11:11.868 00:11:11.868 verify_dump=1 00:11:11.868 verify_backlog=512 00:11:11.868 verify_state_save=0 00:11:11.868 do_verify=1 00:11:11.868 verify=crc32c-intel 00:11:11.868 [job0] 00:11:11.868 filename=/dev/nvme0n1 
00:11:11.868 [job1] 00:11:11.868 filename=/dev/nvme0n2 00:11:11.868 [job2] 00:11:11.868 filename=/dev/nvme0n3 00:11:11.868 [job3] 00:11:11.868 filename=/dev/nvme0n4 00:11:11.868 Could not set queue depth (nvme0n1) 00:11:11.868 Could not set queue depth (nvme0n2) 00:11:11.868 Could not set queue depth (nvme0n3) 00:11:11.868 Could not set queue depth (nvme0n4) 00:11:12.127 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:12.127 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:12.127 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:12.127 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:12.127 fio-3.35 00:11:12.127 Starting 4 threads 00:11:13.502 00:11:13.502 job0: (groupid=0, jobs=1): err= 0: pid=2913991: Tue Nov 19 21:00:46 2024 00:11:13.502 read: IOPS=1577, BW=6310KiB/s (6462kB/s)(6424KiB/1018msec) 00:11:13.502 slat (usec): min=2, max=16969, avg=212.84, stdev=1258.87 00:11:13.502 clat (usec): min=7104, max=97009, avg=23747.44, stdev=15595.73 00:11:13.502 lat (usec): min=7112, max=97017, avg=23960.28, stdev=15725.92 00:11:13.502 clat percentiles (usec): 00:11:13.502 | 1.00th=[ 8979], 5.00th=[13566], 10.00th=[14222], 20.00th=[14353], 00:11:13.502 | 30.00th=[14484], 40.00th=[15008], 50.00th=[16188], 60.00th=[21103], 00:11:13.502 | 70.00th=[24249], 80.00th=[28705], 90.00th=[44303], 95.00th=[60556], 00:11:13.502 | 99.00th=[85459], 99.50th=[89654], 99.90th=[96994], 99.95th=[96994], 00:11:13.502 | 99.99th=[96994] 00:11:13.502 write: IOPS=2011, BW=8047KiB/s (8240kB/s)(8192KiB/1018msec); 0 zone resets 00:11:13.502 slat (usec): min=3, max=25242, avg=305.45, stdev=1576.45 00:11:13.502 clat (usec): min=923, max=160478, avg=43440.54, stdev=33926.96 00:11:13.502 lat (usec): min=932, max=160489, avg=43745.99, stdev=34137.40 00:11:13.502 clat percentiles (msec): 00:11:13.502 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 11], 20.00th=[ 17], 00:11:13.502 | 30.00th=[ 22], 40.00th=[ 30], 50.00th=[ 36], 60.00th=[ 40], 00:11:13.502 | 70.00th=[ 52], 80.00th=[ 60], 90.00th=[ 80], 95.00th=[ 133], 00:11:13.502 | 99.00th=[ 150], 99.50th=[ 157], 99.90th=[ 161], 99.95th=[ 161], 00:11:13.502 | 99.99th=[ 161] 00:11:13.502 bw ( KiB/s): min= 6848, max= 9072, per=16.49%, avg=7960.00, stdev=1572.61, samples=2 00:11:13.502 iops : min= 1712, max= 2268, avg=1990.00, stdev=393.15, samples=2 00:11:13.502 lat (usec) : 1000=0.19% 00:11:13.502 lat (msec) : 4=1.07%, 10=3.86%, 20=34.26%, 50=38.67%, 100=16.94% 00:11:13.502 lat (msec) : 250=5.01% 00:11:13.502 cpu : usr=1.47%, sys=2.65%, ctx=202, majf=0, minf=1 00:11:13.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:11:13.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.502 issued rwts: total=1606,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.502 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.502 job1: (groupid=0, jobs=1): err= 0: pid=2914004: Tue Nov 19 21:00:46 2024 00:11:13.502 read: IOPS=2660, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1010msec) 00:11:13.502 slat (usec): min=3, max=15725, avg=148.59, stdev=951.70 00:11:13.502 clat (usec): min=6363, max=50035, avg=18001.45, stdev=6947.20 00:11:13.502 lat (usec): min=7207, max=50052, avg=18150.04, stdev=7015.14 
00:11:13.502 clat percentiles (usec): 00:11:13.502 | 1.00th=[ 9241], 5.00th=[12125], 10.00th=[12387], 20.00th=[13698], 00:11:13.502 | 30.00th=[14484], 40.00th=[15664], 50.00th=[16188], 60.00th=[16712], 00:11:13.502 | 70.00th=[17433], 80.00th=[20579], 90.00th=[24773], 95.00th=[35914], 00:11:13.502 | 99.00th=[46400], 99.50th=[47973], 99.90th=[50070], 99.95th=[50070], 00:11:13.502 | 99.99th=[50070] 00:11:13.502 write: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec); 0 zone resets 00:11:13.502 slat (usec): min=4, max=15258, avg=184.59, stdev=1033.61 00:11:13.502 clat (usec): min=4973, max=90820, avg=25908.70, stdev=18176.74 00:11:13.502 lat (usec): min=4982, max=90857, avg=26093.29, stdev=18293.79 00:11:13.502 clat percentiles (usec): 00:11:13.502 | 1.00th=[ 7111], 5.00th=[11600], 10.00th=[12256], 20.00th=[12518], 00:11:13.502 | 30.00th=[13304], 40.00th=[15795], 50.00th=[18220], 60.00th=[24249], 00:11:13.502 | 70.00th=[28967], 80.00th=[35390], 90.00th=[48497], 95.00th=[74974], 00:11:13.502 | 99.00th=[88605], 99.50th=[89654], 99.90th=[90702], 99.95th=[90702], 00:11:13.502 | 99.99th=[90702] 00:11:13.502 bw ( KiB/s): min=11760, max=12808, per=25.44%, avg=12284.00, stdev=741.05, samples=2 00:11:13.502 iops : min= 2940, max= 3202, avg=3071.00, stdev=185.26, samples=2 00:11:13.502 lat (msec) : 10=1.81%, 20=64.63%, 50=28.36%, 100=5.21% 00:11:13.502 cpu : usr=4.56%, sys=7.04%, ctx=259, majf=0, minf=1 00:11:13.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:13.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.502 issued rwts: total=2687,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.502 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.502 job2: (groupid=0, jobs=1): err= 0: pid=2914044: Tue Nov 19 21:00:46 2024 00:11:13.502 read: IOPS=2171, BW=8684KiB/s (8893kB/s)(8832KiB/1017msec) 00:11:13.502 slat (usec): min=2, max=41199, avg=215.55, stdev=1765.55 00:11:13.502 clat (msec): min=6, max=130, avg=26.28, stdev=21.71 00:11:13.502 lat (msec): min=6, max=130, avg=26.50, stdev=21.87 00:11:13.502 clat percentiles (msec): 00:11:13.502 | 1.00th=[ 9], 5.00th=[ 12], 10.00th=[ 14], 20.00th=[ 16], 00:11:13.502 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 18], 60.00th=[ 21], 00:11:13.502 | 70.00th=[ 25], 80.00th=[ 29], 90.00th=[ 63], 95.00th=[ 81], 00:11:13.502 | 99.00th=[ 113], 99.50th=[ 118], 99.90th=[ 131], 99.95th=[ 131], 00:11:13.502 | 99.99th=[ 131] 00:11:13.502 write: IOPS=2517, BW=9.83MiB/s (10.3MB/s)(10.0MiB/1017msec); 0 zone resets 00:11:13.502 slat (usec): min=3, max=17840, avg=187.72, stdev=1156.72 00:11:13.502 clat (usec): min=899, max=149838, avg=27725.68, stdev=28438.73 00:11:13.502 lat (usec): min=907, max=149873, avg=27913.40, stdev=28619.18 00:11:13.502 clat percentiles (msec): 00:11:13.502 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 13], 20.00th=[ 15], 00:11:13.502 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 16], 60.00th=[ 17], 00:11:13.502 | 70.00th=[ 18], 80.00th=[ 33], 90.00th=[ 69], 95.00th=[ 104], 00:11:13.502 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 150], 99.95th=[ 150], 00:11:13.502 | 99.99th=[ 150] 00:11:13.502 bw ( KiB/s): min= 4096, max=16384, per=21.21%, avg=10240.00, stdev=8688.93, samples=2 00:11:13.502 iops : min= 1024, max= 4096, avg=2560.00, stdev=2172.23, samples=2 00:11:13.502 lat (usec) : 1000=0.04% 00:11:13.502 lat (msec) : 10=4.19%, 20=63.59%, 50=20.18%, 100=7.86%, 250=4.13% 00:11:13.502 cpu : usr=2.17%, 
sys=3.84%, ctx=212, majf=0, minf=2 00:11:13.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:13.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.503 issued rwts: total=2208,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.503 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.503 job3: (groupid=0, jobs=1): err= 0: pid=2914046: Tue Nov 19 21:00:46 2024 00:11:13.503 read: IOPS=4213, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1002msec) 00:11:13.503 slat (usec): min=3, max=6990, avg=112.25, stdev=636.53 00:11:13.503 clat (usec): min=1225, max=22127, avg=14223.17, stdev=2243.60 00:11:13.503 lat (usec): min=3580, max=22281, avg=14335.42, stdev=2294.82 00:11:13.503 clat percentiles (usec): 00:11:13.503 | 1.00th=[ 7504], 5.00th=[10421], 10.00th=[11994], 20.00th=[12911], 00:11:13.503 | 30.00th=[13304], 40.00th=[13566], 50.00th=[14222], 60.00th=[14746], 00:11:13.503 | 70.00th=[15008], 80.00th=[15270], 90.00th=[16909], 95.00th=[18482], 00:11:13.503 | 99.00th=[21103], 99.50th=[21627], 99.90th=[22152], 99.95th=[22152], 00:11:13.503 | 99.99th=[22152] 00:11:13.503 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:11:13.503 slat (usec): min=5, max=5447, avg=101.58, stdev=417.44 00:11:13.503 clat (usec): min=7753, max=23038, avg=14508.21, stdev=2038.92 00:11:13.503 lat (usec): min=7774, max=23050, avg=14609.79, stdev=2068.15 00:11:13.503 clat percentiles (usec): 00:11:13.503 | 1.00th=[ 8717], 5.00th=[11469], 10.00th=[12780], 20.00th=[13173], 00:11:13.503 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14222], 60.00th=[14746], 00:11:13.503 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16581], 95.00th=[17957], 00:11:13.503 | 99.00th=[21365], 99.50th=[21890], 99.90th=[22414], 99.95th=[22414], 00:11:13.503 | 99.99th=[22938] 00:11:13.503 bw ( KiB/s): min=16384, max=20472, per=38.17%, avg=18428.00, stdev=2890.65, samples=2 00:11:13.503 iops : min= 4096, max= 5118, avg=4607.00, stdev=722.66, samples=2 00:11:13.503 lat (msec) : 2=0.01%, 4=0.09%, 10=2.41%, 20=95.20%, 50=2.29% 00:11:13.503 cpu : usr=7.39%, sys=11.79%, ctx=514, majf=0, minf=1 00:11:13.503 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:13.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:13.503 issued rwts: total=4222,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.503 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:13.503 00:11:13.503 Run status group 0 (all jobs): 00:11:13.503 READ: bw=41.1MiB/s (43.1MB/s), 6310KiB/s-16.5MiB/s (6462kB/s-17.3MB/s), io=41.9MiB (43.9MB), run=1002-1018msec 00:11:13.503 WRITE: bw=47.2MiB/s (49.4MB/s), 8047KiB/s-18.0MiB/s (8240kB/s-18.8MB/s), io=48.0MiB (50.3MB), run=1002-1018msec 00:11:13.503 00:11:13.503 Disk stats (read/write): 00:11:13.503 nvme0n1: ios=1574/1863, merge=0/0, ticks=21006/42366, in_queue=63372, util=99.00% 00:11:13.503 nvme0n2: ios=2085/2264, merge=0/0, ticks=37292/67597, in_queue=104889, util=98.58% 00:11:13.503 nvme0n3: ios=2073/2519, merge=0/0, ticks=31100/36823, in_queue=67923, util=94.67% 00:11:13.503 nvme0n4: ios=3584/3825, merge=0/0, ticks=25146/25371, in_queue=50517, util=89.66% 00:11:13.503 21:00:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:13.503 21:00:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@59 -- # fio_pid=2914178 00:11:13.503 21:00:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:13.503 21:00:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:13.503 [global] 00:11:13.503 thread=1 00:11:13.503 invalidate=1 00:11:13.503 rw=read 00:11:13.503 time_based=1 00:11:13.503 runtime=10 00:11:13.503 ioengine=libaio 00:11:13.503 direct=1 00:11:13.503 bs=4096 00:11:13.503 iodepth=1 00:11:13.503 norandommap=1 00:11:13.503 numjobs=1 00:11:13.503 00:11:13.503 [job0] 00:11:13.503 filename=/dev/nvme0n1 00:11:13.503 [job1] 00:11:13.503 filename=/dev/nvme0n2 00:11:13.503 [job2] 00:11:13.503 filename=/dev/nvme0n3 00:11:13.503 [job3] 00:11:13.503 filename=/dev/nvme0n4 00:11:13.503 Could not set queue depth (nvme0n1) 00:11:13.503 Could not set queue depth (nvme0n2) 00:11:13.503 Could not set queue depth (nvme0n3) 00:11:13.503 Could not set queue depth (nvme0n4) 00:11:13.503 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.503 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.503 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.503 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.503 fio-3.35 00:11:13.503 Starting 4 threads 00:11:16.783 21:00:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:16.783 21:00:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:16.783 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=294912, buflen=4096 00:11:16.784 fio: pid=2914279, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:16.784 21:00:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.784 21:00:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:16.784 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=323584, buflen=4096 00:11:16.784 fio: pid=2914278, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:17.042 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=561152, buflen=4096 00:11:17.042 fio: pid=2914276, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:17.042 21:00:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:17.042 21:00:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:17.609 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=3407872, buflen=4096 00:11:17.609 fio: pid=2914277, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:17.609 00:11:17.609 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, 
error=Operation not supported): pid=2914276: Tue Nov 19 21:00:51 2024 00:11:17.609 read: IOPS=39, BW=157KiB/s (161kB/s)(548KiB/3489msec) 00:11:17.609 slat (usec): min=8, max=12009, avg=159.84, stdev=1218.79 00:11:17.609 clat (usec): min=304, max=41882, avg=25131.55, stdev=19702.93 00:11:17.609 lat (usec): min=319, max=52946, avg=25292.43, stdev=19856.84 00:11:17.609 clat percentiles (usec): 00:11:17.609 | 1.00th=[ 318], 5.00th=[ 404], 10.00th=[ 408], 20.00th=[ 416], 00:11:17.609 | 30.00th=[ 429], 40.00th=[40633], 50.00th=[40633], 60.00th=[40633], 00:11:17.609 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:17.609 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:17.609 | 99.99th=[41681] 00:11:17.609 bw ( KiB/s): min= 120, max= 184, per=13.39%, avg=157.33, stdev=23.55, samples=6 00:11:17.609 iops : min= 30, max= 46, avg=39.33, stdev= 5.89, samples=6 00:11:17.609 lat (usec) : 500=37.68%, 750=0.72% 00:11:17.609 lat (msec) : 50=60.87% 00:11:17.609 cpu : usr=0.11%, sys=0.00%, ctx=143, majf=0, minf=1 00:11:17.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.609 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.609 issued rwts: total=138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.609 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2914277: Tue Nov 19 21:00:51 2024 00:11:17.609 read: IOPS=217, BW=871KiB/s (892kB/s)(3328KiB/3822msec) 00:11:17.609 slat (usec): min=4, max=8906, avg=43.64, stdev=441.06 00:11:17.609 clat (usec): min=219, max=41958, avg=4532.44, stdev=12460.50 00:11:17.609 lat (usec): min=228, max=50027, avg=4576.11, stdev=12546.07 00:11:17.609 clat percentiles (usec): 00:11:17.609 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 239], 00:11:17.609 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 269], 60.00th=[ 281], 00:11:17.609 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[40633], 95.00th=[41157], 00:11:17.609 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:17.609 | 99.99th=[42206] 00:11:17.609 bw ( KiB/s): min= 96, max= 3400, per=80.36%, avg=942.14, stdev=1326.39, samples=7 00:11:17.609 iops : min= 24, max= 850, avg=235.43, stdev=331.68, samples=7 00:11:17.609 lat (usec) : 250=41.42%, 500=47.54%, 750=0.36% 00:11:17.609 lat (msec) : 2=0.12%, 50=10.44% 00:11:17.609 cpu : usr=0.21%, sys=0.42%, ctx=836, majf=0, minf=2 00:11:17.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.609 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.609 issued rwts: total=833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.609 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2914278: Tue Nov 19 21:00:51 2024 00:11:17.609 read: IOPS=24, BW=98.0KiB/s (100kB/s)(316KiB/3224msec) 00:11:17.609 slat (nsec): min=12130, max=36095, avg=23780.61, stdev=9703.03 00:11:17.609 clat (usec): min=679, max=41945, avg=40486.97, stdev=4538.85 00:11:17.609 lat (usec): min=715, max=41962, avg=40510.83, stdev=4537.44 00:11:17.609 clat percentiles (usec): 00:11:17.609 | 1.00th=[ 676], 
5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:17.609 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:17.609 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:17.609 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:17.609 | 99.99th=[42206] 00:11:17.609 bw ( KiB/s): min= 96, max= 104, per=8.36%, avg=98.67, stdev= 4.13, samples=6 00:11:17.609 iops : min= 24, max= 26, avg=24.67, stdev= 1.03, samples=6 00:11:17.609 lat (usec) : 750=1.25% 00:11:17.609 lat (msec) : 50=97.50% 00:11:17.609 cpu : usr=0.12%, sys=0.00%, ctx=81, majf=0, minf=1 00:11:17.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.609 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.609 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.609 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2914279: Tue Nov 19 21:00:51 2024 00:11:17.609 read: IOPS=24, BW=98.1KiB/s (100kB/s)(288KiB/2935msec) 00:11:17.609 slat (nsec): min=8192, max=35938, avg=24344.40, stdev=9507.71 00:11:17.609 clat (usec): min=456, max=41337, avg=40411.08, stdev=4775.37 00:11:17.609 lat (usec): min=490, max=41345, avg=40435.54, stdev=4774.09 00:11:17.609 clat percentiles (usec): 00:11:17.609 | 1.00th=[ 457], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:17.609 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:17.609 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:17.609 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:17.609 | 99.99th=[41157] 00:11:17.609 bw ( KiB/s): min= 96, max= 104, per=8.28%, avg=97.60, stdev= 3.58, samples=5 00:11:17.609 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:11:17.609 lat (usec) : 500=1.37% 00:11:17.609 lat (msec) : 50=97.26% 00:11:17.609 cpu : usr=0.14%, sys=0.00%, ctx=74, majf=0, minf=1 00:11:17.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:17.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.609 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.609 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:17.609 00:11:17.609 Run status group 0 (all jobs): 00:11:17.609 READ: bw=1172KiB/s (1200kB/s), 98.0KiB/s-871KiB/s (100kB/s-892kB/s), io=4480KiB (4588kB), run=2935-3822msec 00:11:17.609 00:11:17.609 Disk stats (read/write): 00:11:17.609 nvme0n1: ios=174/0, merge=0/0, ticks=4435/0, in_queue=4435, util=98.80% 00:11:17.609 nvme0n2: ios=827/0, merge=0/0, ticks=3566/0, in_queue=3566, util=96.22% 00:11:17.609 nvme0n3: ios=76/0, merge=0/0, ticks=3078/0, in_queue=3078, util=96.73% 00:11:17.609 nvme0n4: ios=70/0, merge=0/0, ticks=2830/0, in_queue=2830, util=96.71% 00:11:17.609 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:17.609 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:17.867 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:17.868 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:18.125 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:18.125 21:00:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:18.384 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:18.384 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:18.951 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:18.951 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:19.209 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:19.209 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2914178 00:11:19.209 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:19.209 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:20.143 nvmf hotplug test: fio failed as expected 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:20.143 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:20.143 rmmod nvme_tcp 00:11:20.401 rmmod nvme_fabrics 00:11:20.401 rmmod nvme_keyring 00:11:20.401 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:20.401 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:20.401 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:20.401 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2912008 ']' 00:11:20.401 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2912008 00:11:20.401 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2912008 ']' 00:11:20.401 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2912008 00:11:20.401 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:20.401 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.401 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2912008 00:11:20.401 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.401 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.401 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2912008' 00:11:20.401 killing process with pid 2912008 00:11:20.401 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2912008 00:11:20.401 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2912008 00:11:21.777 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:21.777 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:21.777 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:21.777 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:21.777 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:21.777 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:21.777 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:21.777 21:00:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:21.777 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:21.777 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.777 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.777 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:23.684 00:11:23.684 real 0m27.404s 00:11:23.684 user 1m35.983s 00:11:23.684 sys 0m6.315s 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.684 ************************************ 00:11:23.684 END TEST nvmf_fio_target 00:11:23.684 ************************************ 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:23.684 ************************************ 00:11:23.684 START TEST nvmf_bdevio 00:11:23.684 ************************************ 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:23.684 * Looking for test storage... 
00:11:23.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.684 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:23.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.685 --rc genhtml_branch_coverage=1 00:11:23.685 --rc genhtml_function_coverage=1 00:11:23.685 --rc genhtml_legend=1 00:11:23.685 --rc geninfo_all_blocks=1 00:11:23.685 --rc geninfo_unexecuted_blocks=1 00:11:23.685 00:11:23.685 ' 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:23.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.685 --rc genhtml_branch_coverage=1 00:11:23.685 --rc genhtml_function_coverage=1 00:11:23.685 --rc genhtml_legend=1 00:11:23.685 --rc geninfo_all_blocks=1 00:11:23.685 --rc geninfo_unexecuted_blocks=1 00:11:23.685 00:11:23.685 ' 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:23.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.685 --rc genhtml_branch_coverage=1 00:11:23.685 --rc genhtml_function_coverage=1 00:11:23.685 --rc genhtml_legend=1 00:11:23.685 --rc geninfo_all_blocks=1 00:11:23.685 --rc geninfo_unexecuted_blocks=1 00:11:23.685 00:11:23.685 ' 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:23.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.685 --rc genhtml_branch_coverage=1 00:11:23.685 --rc genhtml_function_coverage=1 00:11:23.685 --rc genhtml_legend=1 00:11:23.685 --rc geninfo_all_blocks=1 00:11:23.685 --rc geninfo_unexecuted_blocks=1 00:11:23.685 00:11:23.685 ' 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.685 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:23.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:23.686 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.216 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.216 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:26.216 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:26.216 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:26.216 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:26.217 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:26.217 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:26.217 21:00:59 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:26.217 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:26.217 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.217 
21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:26.217 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:26.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:11:26.217 00:11:26.217 --- 10.0.0.2 ping statistics --- 00:11:26.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.218 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
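Condensed from the nvmf_tcp_init xtrace above, the test network amounts to roughly the following setup (a sketch of the traced commands only; cvl_0_0/cvl_0_1 are the two ice ports found earlier, and the additional checks performed by nvmf/common.sh are omitted):

# move the target port into its own network namespace; the initiator port stays in the root namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# 10.0.0.1 = initiator side (cvl_0_1), 10.0.0.2 = target side inside the namespace (cvl_0_0)
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow inbound TCP/4420 (NVMe/TCP) on the initiator-side interface, then verify connectivity both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1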
00:11:26.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:11:26.218 00:11:26.218 --- 10.0.0.1 ping statistics --- 00:11:26.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.218 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2917173 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2917173 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2917173 ']' 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.218 21:00:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.218 [2024-11-19 21:00:59.758495] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:11:26.218 [2024-11-19 21:00:59.758661] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.218 [2024-11-19 21:00:59.929851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.475 [2024-11-19 21:01:00.086449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.475 [2024-11-19 21:01:00.086535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.476 [2024-11-19 21:01:00.086561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.476 [2024-11-19 21:01:00.086586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.476 [2024-11-19 21:01:00.086606] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:26.476 [2024-11-19 21:01:00.089496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:26.476 [2024-11-19 21:01:00.089551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:26.476 [2024-11-19 21:01:00.089668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.476 [2024-11-19 21:01:00.089672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:27.042 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.042 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:27.042 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:27.042 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:27.042 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.042 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.042 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:27.042 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.042 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.042 [2024-11-19 21:01:00.791713] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.042 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.042 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:27.042 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.042 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.301 Malloc0 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.301 21:01:00 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.301 [2024-11-19 21:01:00.907025] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:27.301 { 00:11:27.301 "params": { 00:11:27.301 "name": "Nvme$subsystem", 00:11:27.301 "trtype": "$TEST_TRANSPORT", 00:11:27.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:27.301 "adrfam": "ipv4", 00:11:27.301 "trsvcid": "$NVMF_PORT", 00:11:27.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:27.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:27.301 "hdgst": ${hdgst:-false}, 00:11:27.301 "ddgst": ${ddgst:-false} 00:11:27.301 }, 00:11:27.301 "method": "bdev_nvme_attach_controller" 00:11:27.301 } 00:11:27.301 EOF 00:11:27.301 )") 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:27.301 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:27.302 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:27.302 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:27.302 "params": { 00:11:27.302 "name": "Nvme1", 00:11:27.302 "trtype": "tcp", 00:11:27.302 "traddr": "10.0.0.2", 00:11:27.302 "adrfam": "ipv4", 00:11:27.302 "trsvcid": "4420", 00:11:27.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:27.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:27.302 "hdgst": false, 00:11:27.302 "ddgst": false 00:11:27.302 }, 00:11:27.302 "method": "bdev_nvme_attach_controller" 00:11:27.302 }' 00:11:27.302 [2024-11-19 21:01:00.995625] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
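For reference, the target that this bdevio run connects to was assembled by the rpc_cmd calls traced above; condensed into a sketch (the flag annotations follow the usual scripts/rpc.py option meanings and are not taken from the log):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192                                     # TCP transport, 8 KiB I/O unit size
rpc_cmd bdev_malloc_create 64 512 -b Malloc0                                        # 64 MiB malloc bdev, 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, fixed serial
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as a namespace
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listen on the target-netns IP

bdevio then attaches to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 via the bdev_nvme_attach_controller entry in the JSON printed above and runs its test suite against the resulting Nvme1n1 bdev.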
00:11:27.302 [2024-11-19 21:01:00.995772] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2917330 ] 00:11:27.560 [2024-11-19 21:01:01.138545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:27.560 [2024-11-19 21:01:01.273329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.560 [2024-11-19 21:01:01.273380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.560 [2024-11-19 21:01:01.273384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.128 I/O targets: 00:11:28.128 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:28.128 00:11:28.128 00:11:28.128 CUnit - A unit testing framework for C - Version 2.1-3 00:11:28.128 http://cunit.sourceforge.net/ 00:11:28.128 00:11:28.128 00:11:28.128 Suite: bdevio tests on: Nvme1n1 00:11:28.128 Test: blockdev write read block ...passed 00:11:28.128 Test: blockdev write zeroes read block ...passed 00:11:28.128 Test: blockdev write zeroes read no split ...passed 00:11:28.386 Test: blockdev write zeroes read split ...passed 00:11:28.386 Test: blockdev write zeroes read split partial ...passed 00:11:28.386 Test: blockdev reset ...[2024-11-19 21:01:02.024738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:28.386 [2024-11-19 21:01:02.024919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:11:28.386 [2024-11-19 21:01:02.041207] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:28.386 passed 00:11:28.386 Test: blockdev write read 8 blocks ...passed 00:11:28.386 Test: blockdev write read size > 128k ...passed 00:11:28.386 Test: blockdev write read invalid size ...passed 00:11:28.386 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:28.386 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:28.386 Test: blockdev write read max offset ...passed 00:11:28.646 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:28.646 Test: blockdev writev readv 8 blocks ...passed 00:11:28.646 Test: blockdev writev readv 30 x 1block ...passed 00:11:28.646 Test: blockdev writev readv block ...passed 00:11:28.646 Test: blockdev writev readv size > 128k ...passed 00:11:28.646 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:28.646 Test: blockdev comparev and writev ...[2024-11-19 21:01:02.294969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.646 [2024-11-19 21:01:02.295042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:28.646 [2024-11-19 21:01:02.295089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.646 [2024-11-19 21:01:02.295117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:28.646 [2024-11-19 21:01:02.295591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.646 [2024-11-19 21:01:02.295625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:28.646 [2024-11-19 21:01:02.295658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.646 [2024-11-19 21:01:02.295684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:28.646 [2024-11-19 21:01:02.296115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.646 [2024-11-19 21:01:02.296156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:28.646 [2024-11-19 21:01:02.296193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.646 [2024-11-19 21:01:02.296230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:28.646 [2024-11-19 21:01:02.296734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.646 [2024-11-19 21:01:02.296769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:28.646 [2024-11-19 21:01:02.296808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:28.646 [2024-11-19 21:01:02.296834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:28.646 passed 00:11:28.646 Test: blockdev nvme passthru rw ...passed 00:11:28.646 Test: blockdev nvme passthru vendor specific ...[2024-11-19 21:01:02.379462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:28.646 [2024-11-19 21:01:02.379519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:28.646 [2024-11-19 21:01:02.379725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:28.646 [2024-11-19 21:01:02.379757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:28.646 [2024-11-19 21:01:02.379961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:28.646 [2024-11-19 21:01:02.379996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:28.646 [2024-11-19 21:01:02.380194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:28.646 [2024-11-19 21:01:02.380226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:28.646 passed 00:11:28.646 Test: blockdev nvme admin passthru ...passed 00:11:28.646 Test: blockdev copy ...passed 00:11:28.646 00:11:28.646 Run Summary: Type Total Ran Passed Failed Inactive 00:11:28.646 suites 1 1 n/a 0 0 00:11:28.646 tests 23 23 23 0 0 00:11:28.646 asserts 152 152 152 0 n/a 00:11:28.646 00:11:28.646 Elapsed time = 1.271 seconds 00:11:29.581 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.581 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.581 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:29.581 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.581 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:29.581 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:29.581 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:29.581 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:29.581 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:29.581 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:29.581 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:29.581 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:29.581 rmmod nvme_tcp 00:11:29.581 rmmod nvme_fabrics 00:11:29.581 rmmod nvme_keyring 00:11:29.582 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:29.582 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:29.582 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:11:29.582 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2917173 ']' 00:11:29.582 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2917173 00:11:29.582 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2917173 ']' 00:11:29.582 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2917173 00:11:29.582 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:29.582 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:29.582 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2917173 00:11:29.582 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:29.582 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:29.582 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2917173' 00:11:29.582 killing process with pid 2917173 00:11:29.582 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2917173 00:11:29.582 21:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2917173 00:11:30.957 21:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:30.957 21:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:30.957 21:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:30.957 21:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:30.957 21:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:30.957 21:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:30.957 21:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:30.957 21:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:30.957 21:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:30.957 21:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.957 21:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.957 21:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.888 21:01:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:32.888 00:11:32.888 real 0m9.385s 00:11:32.888 user 0m22.560s 00:11:32.888 sys 0m2.518s 00:11:32.888 21:01:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.888 21:01:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.888 ************************************ 00:11:32.888 END TEST nvmf_bdevio 00:11:32.888 ************************************ 00:11:32.888 21:01:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:32.888 00:11:32.888 real 4m31.057s 00:11:32.888 user 11m50.279s 00:11:32.888 sys 1m9.689s 
00:11:32.888 21:01:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.888 21:01:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:32.888 ************************************ 00:11:32.888 END TEST nvmf_target_core 00:11:32.888 ************************************ 00:11:32.888 21:01:06 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:32.888 21:01:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:32.888 21:01:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.888 21:01:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:33.147 ************************************ 00:11:33.147 START TEST nvmf_target_extra 00:11:33.147 ************************************ 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:33.148 * Looking for test storage... 00:11:33.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:33.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.148 --rc genhtml_branch_coverage=1 00:11:33.148 --rc genhtml_function_coverage=1 00:11:33.148 --rc genhtml_legend=1 00:11:33.148 --rc geninfo_all_blocks=1 00:11:33.148 --rc geninfo_unexecuted_blocks=1 00:11:33.148 00:11:33.148 ' 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:33.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.148 --rc genhtml_branch_coverage=1 00:11:33.148 --rc genhtml_function_coverage=1 00:11:33.148 --rc genhtml_legend=1 00:11:33.148 --rc geninfo_all_blocks=1 00:11:33.148 --rc geninfo_unexecuted_blocks=1 00:11:33.148 00:11:33.148 ' 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:33.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.148 --rc genhtml_branch_coverage=1 00:11:33.148 --rc genhtml_function_coverage=1 00:11:33.148 --rc genhtml_legend=1 00:11:33.148 --rc geninfo_all_blocks=1 00:11:33.148 --rc geninfo_unexecuted_blocks=1 00:11:33.148 00:11:33.148 ' 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:33.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.148 --rc genhtml_branch_coverage=1 00:11:33.148 --rc genhtml_function_coverage=1 00:11:33.148 --rc genhtml_legend=1 00:11:33.148 --rc geninfo_all_blocks=1 00:11:33.148 --rc geninfo_unexecuted_blocks=1 00:11:33.148 00:11:33.148 ' 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:33.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.148 21:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:33.148 ************************************ 00:11:33.148 START TEST nvmf_example 00:11:33.148 ************************************ 00:11:33.149 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:33.149 * Looking for test storage... 
00:11:33.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.149 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:33.149 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:33.149 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:33.407 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:33.407 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.407 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.407 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.407 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.407 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.407 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.407 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.407 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.407 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.407 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.407 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.407 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:33.407 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:33.408 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.408 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:33.408 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:33.408 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:33.408 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.408 21:01:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:33.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.408 --rc genhtml_branch_coverage=1 00:11:33.408 --rc genhtml_function_coverage=1 00:11:33.408 --rc genhtml_legend=1 00:11:33.408 --rc geninfo_all_blocks=1 00:11:33.408 --rc geninfo_unexecuted_blocks=1 00:11:33.408 00:11:33.408 ' 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:33.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.408 --rc genhtml_branch_coverage=1 00:11:33.408 --rc genhtml_function_coverage=1 00:11:33.408 --rc genhtml_legend=1 00:11:33.408 --rc geninfo_all_blocks=1 00:11:33.408 --rc geninfo_unexecuted_blocks=1 00:11:33.408 00:11:33.408 ' 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:33.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.408 --rc genhtml_branch_coverage=1 00:11:33.408 --rc genhtml_function_coverage=1 00:11:33.408 --rc genhtml_legend=1 00:11:33.408 --rc geninfo_all_blocks=1 00:11:33.408 --rc geninfo_unexecuted_blocks=1 00:11:33.408 00:11:33.408 ' 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:33.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.408 --rc genhtml_branch_coverage=1 00:11:33.408 --rc genhtml_function_coverage=1 00:11:33.408 --rc genhtml_legend=1 00:11:33.408 --rc geninfo_all_blocks=1 00:11:33.408 --rc geninfo_unexecuted_blocks=1 00:11:33.408 00:11:33.408 ' 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:33.408 21:01:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:33.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:33.408 21:01:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.408 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.409 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.409 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:33.409 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:33.409 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:33.409 21:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:35.310 21:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:35.310 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:35.311 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:35.311 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:35.311 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:35.311 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.311 21:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.311 21:01:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.311 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.311 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.311 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:35.311 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.311 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.311 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.311 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:35.311 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:35.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:35.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:11:35.311 00:11:35.311 --- 10.0.0.2 ping statistics --- 00:11:35.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.311 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:11:35.311 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:11:35.311 00:11:35.311 --- 10.0.0.1 ping statistics --- 00:11:35.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.311 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:11:35.311 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.311 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:35.311 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:35.311 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.311 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:35.311 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:35.311 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.311 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:35.311 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:35.569 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:35.569 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:35.569 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:35.569 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.569 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:35.569 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:35.569 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2919738 00:11:35.569 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:35.570 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:35.570 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2919738 00:11:35.570 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2919738 ']' 00:11:35.570 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.570 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.570 21:01:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.570 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.570 21:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.503 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.762 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.762 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.762 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:36.762 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.762 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.762 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:36.762 21:01:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:48.962 Initializing NVMe Controllers 00:11:48.962 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:48.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:48.962 Initialization complete. Launching workers. 00:11:48.962 ======================================================== 00:11:48.962 Latency(us) 00:11:48.962 Device Information : IOPS MiB/s Average min max 00:11:48.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11750.49 45.90 5447.29 1321.43 15538.92 00:11:48.962 ======================================================== 00:11:48.962 Total : 11750.49 45.90 5447.29 1321.43 15538.92 00:11:48.962 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:48.962 rmmod nvme_tcp 00:11:48.962 rmmod nvme_fabrics 00:11:48.962 rmmod nvme_keyring 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2919738 ']' 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2919738 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2919738 ']' 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2919738 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2919738 00:11:48.962 21:01:20 
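With the example target up, the test provisions it entirely over JSON-RPC and then drives it with spdk_nvme_perf; every call is visible in the trace above. A condensed sketch of the same sequence, with scripts/rpc.py against the default /var/tmp/spdk.sock assumed as a stand-in for the harness's rpc_cmd wrapper:

    # Same RPCs as the trace, issued through scripts/rpc.py (assumed stand-in for rpc_cmd).
    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512                      # RAM-backed bdev, reported back as Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 10 s of 4 KiB mixed random read/write I/O at queue depth 64 against the new listener.
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

For this run the summary printed above works out to roughly 11.75k IOPS (about 45.9 MiB/s) at a 5.45 ms average latency.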
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2919738' 00:11:48.962 killing process with pid 2919738 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2919738 00:11:48.962 21:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2919738 00:11:48.962 nvmf threads initialize successfully 00:11:48.962 bdev subsystem init successfully 00:11:48.962 created a nvmf target service 00:11:48.962 create targets's poll groups done 00:11:48.962 all subsystems of target started 00:11:48.962 nvmf target is running 00:11:48.962 all subsystems of target stopped 00:11:48.962 destroy targets's poll groups done 00:11:48.962 destroyed the nvmf target service 00:11:48.962 bdev subsystem finish successfully 00:11:48.962 nvmf threads destroy successfully 00:11:48.962 21:01:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:48.962 21:01:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:48.962 21:01:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:48.962 21:01:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:48.962 21:01:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:48.962 21:01:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:48.962 21:01:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:48.962 21:01:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:48.962 21:01:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:48.962 21:01:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.962 21:01:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.962 21:01:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.339 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:50.339 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:50.339 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:50.339 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.339 00:11:50.339 real 0m17.187s 00:11:50.339 user 0m48.947s 00:11:50.339 sys 0m3.208s 00:11:50.339 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.339 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.339 ************************************ 00:11:50.339 END TEST nvmf_example 00:11:50.339 ************************************ 00:11:50.339 21:01:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
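The teardown traced above (nvmftestfini) undoes all of this before the next test starts: the example app is killed and reaped by PID, the NVMe-oF kernel modules are unloaded, iptables rules mentioning SPDK_NVMF are dropped, and the test namespace and host-side address are cleared. Roughly, with names from this run and the namespace removal assumed to be what _remove_spdk_ns does (the nvmf_filesystem invocation that the log is starting here continues below):

    kill "$nvmfpid" && wait "$nvmfpid"        # the trace kills and waits on PID 2919738
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Restore iptables without any rules that mention SPDK_NVMF.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip netns delete cvl_0_0_ns_spdk           # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                  # clear the host-side test interface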
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:50.339 21:01:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:50.339 21:01:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.339 21:01:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:50.339 ************************************ 00:11:50.339 START TEST nvmf_filesystem 00:11:50.339 ************************************ 00:11:50.339 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:50.601 * Looking for test storage... 00:11:50.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:50.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.601 --rc genhtml_branch_coverage=1 00:11:50.601 --rc genhtml_function_coverage=1 00:11:50.601 --rc genhtml_legend=1 00:11:50.601 --rc geninfo_all_blocks=1 00:11:50.601 --rc geninfo_unexecuted_blocks=1 00:11:50.601 00:11:50.601 ' 00:11:50.601 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:50.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.601 --rc genhtml_branch_coverage=1 00:11:50.601 --rc genhtml_function_coverage=1 00:11:50.602 --rc genhtml_legend=1 00:11:50.602 --rc geninfo_all_blocks=1 00:11:50.602 --rc geninfo_unexecuted_blocks=1 00:11:50.602 00:11:50.602 ' 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:50.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.602 --rc genhtml_branch_coverage=1 00:11:50.602 --rc genhtml_function_coverage=1 00:11:50.602 --rc genhtml_legend=1 00:11:50.602 --rc geninfo_all_blocks=1 00:11:50.602 --rc geninfo_unexecuted_blocks=1 00:11:50.602 00:11:50.602 ' 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:50.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.602 --rc genhtml_branch_coverage=1 00:11:50.602 --rc genhtml_function_coverage=1 00:11:50.602 --rc genhtml_legend=1 00:11:50.602 --rc geninfo_all_blocks=1 00:11:50.602 --rc geninfo_unexecuted_blocks=1 00:11:50.602 00:11:50.602 ' 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:50.602 21:01:24 
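The nvmf_filesystem test opens by comparing the installed lcov version against 2 with the lt helper from scripts/common.sh; the trace above walks the comparison of 1.15 component by component (split both versions on '.' and '-', then compare numerically until the first difference), and the result selects which --rc lcov flags get exported just after. A standalone sketch of that comparison, under the assumption that it mirrors the helper:

    # Dotted-version "less than" in the spirit of scripts/common.sh lt()/cmp_versions().
    version_lt() {                                   # usage: version_lt 1.15 2
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        local v a b
        for ((v = 0; v < max; v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            if ((a > b)); then return 1; fi          # component greater: not less-than
            if ((a < b)); then return 0; fi          # component smaller: less-than
        done
        return 1                                     # equal versions: not less-than
    }

    version_lt 1.15 2 && echo "1.15 is older than 2"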
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:50.602 
21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:50.602 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:50.603 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:50.603 #define SPDK_CONFIG_H 00:11:50.603 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:50.603 #define SPDK_CONFIG_APPS 1 00:11:50.603 #define SPDK_CONFIG_ARCH native 00:11:50.603 #define SPDK_CONFIG_ASAN 1 00:11:50.603 #undef SPDK_CONFIG_AVAHI 00:11:50.603 #undef SPDK_CONFIG_CET 00:11:50.603 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:50.603 #define SPDK_CONFIG_COVERAGE 1 00:11:50.603 #define SPDK_CONFIG_CROSS_PREFIX 00:11:50.603 #undef SPDK_CONFIG_CRYPTO 00:11:50.603 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:50.603 #undef SPDK_CONFIG_CUSTOMOCF 00:11:50.603 #undef SPDK_CONFIG_DAOS 00:11:50.603 #define SPDK_CONFIG_DAOS_DIR 00:11:50.603 #define SPDK_CONFIG_DEBUG 1 00:11:50.603 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:50.603 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:50.603 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:50.603 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:50.603 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:50.603 #undef SPDK_CONFIG_DPDK_UADK 00:11:50.603 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:50.603 #define SPDK_CONFIG_EXAMPLES 1 00:11:50.603 #undef SPDK_CONFIG_FC 00:11:50.603 #define SPDK_CONFIG_FC_PATH 00:11:50.603 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:50.603 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:50.603 #define SPDK_CONFIG_FSDEV 1 00:11:50.603 #undef SPDK_CONFIG_FUSE 00:11:50.603 #undef SPDK_CONFIG_FUZZER 00:11:50.603 #define SPDK_CONFIG_FUZZER_LIB 00:11:50.603 #undef SPDK_CONFIG_GOLANG 00:11:50.603 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:50.603 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:50.603 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:50.603 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:50.603 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:50.603 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:50.603 #undef SPDK_CONFIG_HAVE_LZ4 00:11:50.603 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:50.603 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:50.603 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:50.603 #define SPDK_CONFIG_IDXD 1 00:11:50.603 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:50.603 #undef SPDK_CONFIG_IPSEC_MB 00:11:50.603 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:50.603 #define SPDK_CONFIG_ISAL 1 00:11:50.603 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:50.603 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:50.603 #define SPDK_CONFIG_LIBDIR 00:11:50.603 #undef SPDK_CONFIG_LTO 00:11:50.603 #define SPDK_CONFIG_MAX_LCORES 128 00:11:50.603 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:50.603 #define SPDK_CONFIG_NVME_CUSE 1 00:11:50.603 #undef SPDK_CONFIG_OCF 00:11:50.603 #define SPDK_CONFIG_OCF_PATH 00:11:50.603 #define SPDK_CONFIG_OPENSSL_PATH 00:11:50.603 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:50.603 #define SPDK_CONFIG_PGO_DIR 00:11:50.603 #undef SPDK_CONFIG_PGO_USE 00:11:50.603 #define SPDK_CONFIG_PREFIX /usr/local 00:11:50.603 #undef SPDK_CONFIG_RAID5F 00:11:50.603 #undef SPDK_CONFIG_RBD 00:11:50.603 #define SPDK_CONFIG_RDMA 1 00:11:50.603 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:50.603 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:50.603 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:50.603 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:50.603 #define SPDK_CONFIG_SHARED 1 00:11:50.603 #undef SPDK_CONFIG_SMA 00:11:50.604 #define SPDK_CONFIG_TESTS 1 00:11:50.604 #undef SPDK_CONFIG_TSAN 
00:11:50.604 #define SPDK_CONFIG_UBLK 1 00:11:50.604 #define SPDK_CONFIG_UBSAN 1 00:11:50.604 #undef SPDK_CONFIG_UNIT_TESTS 00:11:50.604 #undef SPDK_CONFIG_URING 00:11:50.604 #define SPDK_CONFIG_URING_PATH 00:11:50.604 #undef SPDK_CONFIG_URING_ZNS 00:11:50.604 #undef SPDK_CONFIG_USDT 00:11:50.604 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:50.604 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:50.604 #undef SPDK_CONFIG_VFIO_USER 00:11:50.604 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:50.604 #define SPDK_CONFIG_VHOST 1 00:11:50.604 #define SPDK_CONFIG_VIRTIO 1 00:11:50.604 #undef SPDK_CONFIG_VTUNE 00:11:50.604 #define SPDK_CONFIG_VTUNE_DIR 00:11:50.604 #define SPDK_CONFIG_WERROR 1 00:11:50.604 #define SPDK_CONFIG_WPDK_DIR 00:11:50.604 #undef SPDK_CONFIG_XNVME 00:11:50.604 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:50.604 21:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:50.604 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
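Alongside the build configuration, the pm/common trace above builds the list of resource monitors for the run: collect-cpu-load and collect-vmstat are always enabled, while collect-cpu-temp and collect-bmc-pm are added only on bare-metal Linux outside a container (the platform string compared against QEMU is dotted out in the log, so its source below is an assumption). A sketch of that selection, before the SPDK_TEST_* flag exports continue:

    # Monitor selection as traced from scripts/perf/pm/common; the platform_id source is assumed.
    declare -A MONITOR_RESOURCES_SUDO=(
        [collect-bmc-pm]=1
        [collect-cpu-load]=0
        [collect-cpu-temp]=0
        [collect-vmstat]=0
    )
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)

    platform_id=$(cat /sys/class/dmi/id/product_name 2>/dev/null || echo unknown)
    if [[ $(uname -s) == Linux && $platform_id != QEMU && ! -e /.dockerenv ]]; then
        MONITOR_RESOURCES+=(collect-cpu-temp)
        MONITOR_RESOURCES+=(collect-bmc-pm)
    fi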
00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:50.605 21:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:50.605 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:50.606 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
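The leak-suppression steps traced above effectively build a LeakSanitizer suppression file and point LSAN at it; condensed (the output redirection is inferred from the LSAN_OPTIONS path, since redirections do not appear in the xtrace):

    rm -rf /var/tmp/asan_suppression_file
    echo 'leak:libfuse3.so' >> /var/tmp/asan_suppression_file   # ignore known leaks inside libfuse3
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file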
00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2921573 ]] 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2921573 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
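The [[ -z 2921573 ]] / kill -0 2921573 pair above is the usual shell liveness probe: kill -0 delivers no signal and only reports whether the PID still exists and can be signalled. An illustration of the idiom (the exact guard logic inside autotest_common.sh is not visible in this excerpt):

    pid=2921573                                  # test-runner PID from this trace
    if [[ -n $pid ]] && kill -0 "$pid" 2>/dev/null; then
        echo "runner $pid is still alive"        # only then does the harness go on to reserve scratch storage
    fi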
00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.2sb7OL 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.2sb7OL/tests/target /tmp/spdk.2sb7OL 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:50.607 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:50.608 21:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=55022419968 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988528128 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6966108160 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30982897664 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375269376 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22437888 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993854464 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=409600 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:50.608 21:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:50.608 * Looking for test storage... 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=55022419968 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9180700672 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:50.608 21:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:50.608 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:50.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.868 --rc genhtml_branch_coverage=1 00:11:50.868 --rc genhtml_function_coverage=1 00:11:50.868 --rc genhtml_legend=1 00:11:50.868 --rc geninfo_all_blocks=1 00:11:50.868 --rc geninfo_unexecuted_blocks=1 00:11:50.868 00:11:50.868 ' 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:50.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.868 --rc genhtml_branch_coverage=1 00:11:50.868 --rc genhtml_function_coverage=1 00:11:50.868 --rc genhtml_legend=1 00:11:50.868 --rc geninfo_all_blocks=1 00:11:50.868 --rc geninfo_unexecuted_blocks=1 00:11:50.868 00:11:50.868 ' 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:50.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.868 --rc genhtml_branch_coverage=1 00:11:50.868 --rc genhtml_function_coverage=1 00:11:50.868 --rc genhtml_legend=1 00:11:50.868 --rc geninfo_all_blocks=1 00:11:50.868 --rc geninfo_unexecuted_blocks=1 00:11:50.868 00:11:50.868 ' 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:50.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.868 --rc genhtml_branch_coverage=1 00:11:50.868 --rc genhtml_function_coverage=1 00:11:50.868 --rc genhtml_legend=1 00:11:50.868 --rc geninfo_all_blocks=1 00:11:50.868 --rc geninfo_unexecuted_blocks=1 00:11:50.868 00:11:50.868 ' 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.868 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:50.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:50.869 21:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:52.774 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.774 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:52.774 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:52.774 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:52.774 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:52.774 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:52.774 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:52.774 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:52.774 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:52.774 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:52.774 
21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:52.774 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:52.775 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:52.775 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:52.775 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:52.775 Found net devices under 
0000:0a:00.1: cvl_0_1 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.775 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:53.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:11:53.034 00:11:53.034 --- 10.0.0.2 ping statistics --- 00:11:53.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.034 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:53.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:11:53.034 00:11:53.034 --- 10.0.0.1 ping statistics --- 00:11:53.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.034 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:53.034 ************************************ 00:11:53.034 START TEST nvmf_filesystem_no_in_capsule 00:11:53.034 ************************************ 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
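Condensed for readability, the TCP test topology brought up in the nvmf_tcp_init trace above is (commands as shown in the trace; the real iptables rule additionally carries an -m comment tag):

    ip netns add cvl_0_0_ns_spdk                                   # target NIC gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic on the default port
    ping -c 1 10.0.0.2                                             # reachability check in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1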
00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2923334 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2923334 00:11:53.034 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2923334 ']' 00:11:53.035 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.035 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.035 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.035 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.035 21:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.035 [2024-11-19 21:01:26.814478] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:11:53.035 [2024-11-19 21:01:26.814617] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.293 [2024-11-19 21:01:26.965779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.552 [2024-11-19 21:01:27.106143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.552 [2024-11-19 21:01:27.106216] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.552 [2024-11-19 21:01:27.106241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.552 [2024-11-19 21:01:27.106265] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.552 [2024-11-19 21:01:27.106289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
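The target launch traced above reduces to starting nvmf_tgt inside the target namespace and blocking until its RPC socket answers; the wait loop below is a simplified stand-in for waitforlisten, whose body is not shown in this excerpt:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                                              # 2923334 in this run
    until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done   # simplified; waitforlisten also probes the RPC server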
00:11:53.552 [2024-11-19 21:01:27.109181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.552 [2024-11-19 21:01:27.109257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.552 [2024-11-19 21:01:27.109356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.552 [2024-11-19 21:01:27.109377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.119 21:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.119 21:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:54.119 21:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:54.119 21:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:54.119 21:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.119 21:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.119 21:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:54.119 21:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:54.119 21:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.119 21:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.119 [2024-11-19 21:01:27.810154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.119 21:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.119 21:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:54.119 21:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.119 21:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.685 Malloc1 00:11:54.685 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.685 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.686 21:01:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.686 [2024-11-19 21:01:28.409293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:54.686 { 00:11:54.686 "name": "Malloc1", 00:11:54.686 "aliases": [ 00:11:54.686 "390683b7-7a26-4d7d-847d-d6a4a49a003c" 00:11:54.686 ], 00:11:54.686 "product_name": "Malloc disk", 00:11:54.686 "block_size": 512, 00:11:54.686 "num_blocks": 1048576, 00:11:54.686 "uuid": "390683b7-7a26-4d7d-847d-d6a4a49a003c", 00:11:54.686 "assigned_rate_limits": { 00:11:54.686 "rw_ios_per_sec": 0, 00:11:54.686 "rw_mbytes_per_sec": 0, 00:11:54.686 "r_mbytes_per_sec": 0, 00:11:54.686 "w_mbytes_per_sec": 0 00:11:54.686 }, 00:11:54.686 "claimed": true, 00:11:54.686 "claim_type": "exclusive_write", 00:11:54.686 "zoned": false, 00:11:54.686 "supported_io_types": { 00:11:54.686 "read": 
true, 00:11:54.686 "write": true, 00:11:54.686 "unmap": true, 00:11:54.686 "flush": true, 00:11:54.686 "reset": true, 00:11:54.686 "nvme_admin": false, 00:11:54.686 "nvme_io": false, 00:11:54.686 "nvme_io_md": false, 00:11:54.686 "write_zeroes": true, 00:11:54.686 "zcopy": true, 00:11:54.686 "get_zone_info": false, 00:11:54.686 "zone_management": false, 00:11:54.686 "zone_append": false, 00:11:54.686 "compare": false, 00:11:54.686 "compare_and_write": false, 00:11:54.686 "abort": true, 00:11:54.686 "seek_hole": false, 00:11:54.686 "seek_data": false, 00:11:54.686 "copy": true, 00:11:54.686 "nvme_iov_md": false 00:11:54.686 }, 00:11:54.686 "memory_domains": [ 00:11:54.686 { 00:11:54.686 "dma_device_id": "system", 00:11:54.686 "dma_device_type": 1 00:11:54.686 }, 00:11:54.686 { 00:11:54.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.686 "dma_device_type": 2 00:11:54.686 } 00:11:54.686 ], 00:11:54.686 "driver_specific": {} 00:11:54.686 } 00:11:54.686 ]' 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:54.686 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:54.944 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:54.944 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:54.944 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:54.945 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:54.945 21:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:55.510 21:01:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:55.510 21:01:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:55.510 21:01:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:55.510 21:01:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:55.510 21:01:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:57.411 21:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:57.411 21:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:57.411 21:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:57.411 21:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:57.411 21:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:57.411 21:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:57.411 21:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:57.411 21:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:57.411 21:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:57.411 21:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:57.411 21:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:57.411 21:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:57.411 21:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:57.411 21:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:57.411 21:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:57.411 21:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:57.411 21:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:57.669 21:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:58.234 21:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:59.167 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:59.167 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:59.167 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:59.167 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.167 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.167 ************************************ 00:11:59.167 START TEST filesystem_ext4 00:11:59.167 ************************************ 00:11:59.167 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
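The trace above sizes the Malloc1 bdev with bdev_get_bdevs plus jq, attaches it from the initiator with nvme-cli, waits for the device whose serial matches the subsystem, and lays down a GPT partition before the per-filesystem tests begin. A minimal standalone sketch of that flow, assuming an SPDK checkout at $SPDK_DIR (so rpc.py stands in for the suite's rpc_cmd wrapper) and using blockdev --getsize64 in place of the suite's sec_size_to_bytes helper:

# Size the backing bdev: block_size * num_blocks, as the jq filters above compute.
bs=$("$SPDK_DIR"/scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')
nb=$("$SPDK_DIR"/scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')
malloc_size=$((bs * nb))    # 512 * 1048576 = 536870912 bytes in this run

# Attach the namespace over NVMe/TCP from the initiator side.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55

# Wait for a block device carrying the subsystem's serial, then resolve its name.
until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

# Check the exported size and carve a single GPT partition for the filesystem tests.
nvme_size=$(blockdev --getsize64 "/dev/$nvme_name")
(( nvme_size == malloc_size )) || echo "size mismatch: $nvme_size vs $malloc_size" >&2
mkdir -p /mnt/device
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe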
00:11:59.167 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:59.167 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:59.167 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:59.167 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:59.167 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:59.167 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:59.167 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:59.167 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:59.167 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:59.167 21:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:59.167 mke2fs 1.47.0 (5-Feb-2023) 00:11:59.167 Discarding device blocks: 0/522240 done 00:11:59.167 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:59.167 Filesystem UUID: aefdc341-8a86-45a0-bf26-07877bc49ae5 00:11:59.167 Superblock backups stored on blocks: 00:11:59.167 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:59.168 00:11:59.168 Allocating group tables: 0/64 done 00:11:59.168 Writing inode tables: 0/64 done 00:12:01.696 Creating journal (8192 blocks): done 00:12:01.696 Writing superblocks and filesystem accounting information: 0/64 done 00:12:01.696 00:12:01.696 21:01:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:01.696 21:01:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:08.252 
21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2923334 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:08.252 00:12:08.252 real 0m8.858s 00:12:08.252 user 0m0.020s 00:12:08.252 sys 0m0.066s 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:08.252 ************************************ 00:12:08.252 END TEST filesystem_ext4 00:12:08.252 ************************************ 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.252 ************************************ 00:12:08.252 START TEST filesystem_btrfs 00:12:08.252 ************************************ 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:08.252 21:01:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:08.252 btrfs-progs v6.8.1 00:12:08.252 See https://btrfs.readthedocs.io for more information. 00:12:08.252 00:12:08.252 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:08.252 NOTE: several default settings have changed in version 5.15, please make sure 00:12:08.252 this does not affect your deployments: 00:12:08.252 - DUP for metadata (-m dup) 00:12:08.252 - enabled no-holes (-O no-holes) 00:12:08.252 - enabled free-space-tree (-R free-space-tree) 00:12:08.252 00:12:08.252 Label: (null) 00:12:08.252 UUID: 085a7288-f946-453c-8c55-162b9e6549f1 00:12:08.252 Node size: 16384 00:12:08.252 Sector size: 4096 (CPU page size: 4096) 00:12:08.252 Filesystem size: 510.00MiB 00:12:08.252 Block group profiles: 00:12:08.252 Data: single 8.00MiB 00:12:08.252 Metadata: DUP 32.00MiB 00:12:08.252 System: DUP 8.00MiB 00:12:08.252 SSD detected: yes 00:12:08.252 Zoned device: no 00:12:08.252 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:08.252 Checksum: crc32c 00:12:08.252 Number of devices: 1 00:12:08.252 Devices: 00:12:08.252 ID SIZE PATH 00:12:08.252 1 510.00MiB /dev/nvme0n1p1 00:12:08.252 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:08.252 21:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2923334 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:09.187 
21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:09.187 00:12:09.187 real 0m1.082s 00:12:09.187 user 0m0.024s 00:12:09.187 sys 0m0.096s 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:09.187 ************************************ 00:12:09.187 END TEST filesystem_btrfs 00:12:09.187 ************************************ 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.187 ************************************ 00:12:09.187 START TEST filesystem_xfs 00:12:09.187 ************************************ 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:09.187 21:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:09.187 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:09.187 = sectsz=512 attr=2, projid32bit=1 00:12:09.187 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:09.187 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:09.187 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:09.187 = sunit=0 swidth=0 blks 00:12:09.187 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:09.187 log =internal log bsize=4096 blocks=16384, version=2 00:12:09.187 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:09.187 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:10.561 Discarding blocks...Done. 00:12:10.561 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:10.561 21:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:11.936 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:11.936 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:11.936 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:11.936 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:11.936 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:11.936 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:11.936 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2923334 00:12:11.936 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:11.936 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:11.936 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:11.936 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:12.200 00:12:12.200 real 0m2.927s 00:12:12.200 user 0m0.021s 00:12:12.200 sys 0m0.050s 00:12:12.200 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.200 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:12.200 ************************************ 00:12:12.200 END TEST filesystem_xfs 00:12:12.200 ************************************ 00:12:12.200 21:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.526 21:01:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2923334 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2923334 ']' 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2923334 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2923334 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2923334' 00:12:12.526 killing process with pid 2923334 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2923334 00:12:12.526 21:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 2923334 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:15.086 00:12:15.086 real 0m21.843s 00:12:15.086 user 1m22.912s 00:12:15.086 sys 0m2.667s 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.086 ************************************ 00:12:15.086 END TEST nvmf_filesystem_no_in_capsule 00:12:15.086 ************************************ 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:15.086 ************************************ 00:12:15.086 START TEST nvmf_filesystem_in_capsule 00:12:15.086 ************************************ 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2926098 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2926098 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2926098 ']' 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
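Right above, the no_in_capsule suite finishes by disconnecting the initiator, deleting the subsystem, and killing the nvmf_tgt it started (pid 2923334), before the in_capsule variant launches its own target. A condensed sketch of that teardown, assuming rpc.py can reach the target's RPC socket and $nvmfpid holds the target's pid:

# Detach the initiator and wait until the serial disappears from lsblk.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done

# Drop the subsystem from the still-running target, then stop the target itself.
"$SPDK_DIR"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid"
wait "$nvmfpid"    # works here because nvmf_tgt was started from the same shell, as in the suite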
00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.086 21:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.086 [2024-11-19 21:01:48.709368] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:12:15.086 [2024-11-19 21:01:48.709515] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.086 [2024-11-19 21:01:48.868057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.345 [2024-11-19 21:01:49.011868] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.345 [2024-11-19 21:01:49.011954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.345 [2024-11-19 21:01:49.011979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.345 [2024-11-19 21:01:49.012004] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.345 [2024-11-19 21:01:49.012024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.345 [2024-11-19 21:01:49.014880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.345 [2024-11-19 21:01:49.014949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.345 [2024-11-19 21:01:49.015060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.345 [2024-11-19 21:01:49.015065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.909 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.909 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:15.909 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:15.909 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:15.909 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.167 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.167 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:16.167 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:16.167 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.167 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.167 [2024-11-19 21:01:49.711470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.167 21:01:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.167 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:16.167 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.167 21:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.733 Malloc1 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.733 [2024-11-19 21:01:50.327934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:16.733 21:01:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.733 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:16.733 { 00:12:16.733 "name": "Malloc1", 00:12:16.733 "aliases": [ 00:12:16.733 "67be896b-3bdc-47a8-8365-2ba667d0ed9a" 00:12:16.733 ], 00:12:16.733 "product_name": "Malloc disk", 00:12:16.733 "block_size": 512, 00:12:16.733 "num_blocks": 1048576, 00:12:16.733 "uuid": "67be896b-3bdc-47a8-8365-2ba667d0ed9a", 00:12:16.733 "assigned_rate_limits": { 00:12:16.733 "rw_ios_per_sec": 0, 00:12:16.733 "rw_mbytes_per_sec": 0, 00:12:16.733 "r_mbytes_per_sec": 0, 00:12:16.733 "w_mbytes_per_sec": 0 00:12:16.733 }, 00:12:16.734 "claimed": true, 00:12:16.734 "claim_type": "exclusive_write", 00:12:16.734 "zoned": false, 00:12:16.734 "supported_io_types": { 00:12:16.734 "read": true, 00:12:16.734 "write": true, 00:12:16.734 "unmap": true, 00:12:16.734 "flush": true, 00:12:16.734 "reset": true, 00:12:16.734 "nvme_admin": false, 00:12:16.734 "nvme_io": false, 00:12:16.734 "nvme_io_md": false, 00:12:16.734 "write_zeroes": true, 00:12:16.734 "zcopy": true, 00:12:16.734 "get_zone_info": false, 00:12:16.734 "zone_management": false, 00:12:16.734 "zone_append": false, 00:12:16.734 "compare": false, 00:12:16.734 "compare_and_write": false, 00:12:16.734 "abort": true, 00:12:16.734 "seek_hole": false, 00:12:16.734 "seek_data": false, 00:12:16.734 "copy": true, 00:12:16.734 "nvme_iov_md": false 00:12:16.734 }, 00:12:16.734 "memory_domains": [ 00:12:16.734 { 00:12:16.734 "dma_device_id": "system", 00:12:16.734 "dma_device_type": 1 00:12:16.734 }, 00:12:16.734 { 00:12:16.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.734 "dma_device_type": 2 00:12:16.734 } 00:12:16.734 ], 00:12:16.734 "driver_specific": {} 00:12:16.734 } 00:12:16.734 ]' 00:12:16.734 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:16.734 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:16.734 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:16.734 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:16.734 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:16.734 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:16.734 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:16.734 21:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:17.300 21:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:17.300 21:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:17.300 21:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:17.300 21:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:17.300 21:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:19.824 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:19.824 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:19.824 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:19.824 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:19.824 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:19.824 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:19.824 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:19.824 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:19.824 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:19.824 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:19.824 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:19.824 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:19.824 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:19.824 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:19.824 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:19.824 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:19.824 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:19.824 21:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:20.081 21:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:21.013 21:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:21.013 21:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:21.013 21:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:21.013 21:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.013 21:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.272 ************************************ 00:12:21.272 START TEST filesystem_in_capsule_ext4 00:12:21.272 ************************************ 00:12:21.272 21:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:21.272 21:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:21.272 21:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:21.272 21:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:21.272 21:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:21.272 21:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:21.272 21:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:21.272 21:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:21.272 21:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:21.272 21:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:21.272 21:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:21.272 mke2fs 1.47.0 (5-Feb-2023) 00:12:21.272 Discarding device blocks: 0/522240 done 00:12:21.272 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:21.272 Filesystem UUID: 9f1e306b-3ebc-4515-9963-f36762934909 00:12:21.272 Superblock backups stored on blocks: 00:12:21.272 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:21.272 00:12:21.272 Allocating group tables: 0/64 done 00:12:21.272 Writing inode tables: 
0/64 done 00:12:21.272 Creating journal (8192 blocks): done 00:12:21.272 Writing superblocks and filesystem accounting information: 0/64 done 00:12:21.272 00:12:21.272 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:21.272 21:01:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:27.848 21:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:27.848 21:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2926098 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:27.848 00:12:27.848 real 0m6.236s 00:12:27.848 user 0m0.014s 00:12:27.848 sys 0m0.070s 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:27.848 ************************************ 00:12:27.848 END TEST filesystem_in_capsule_ext4 00:12:27.848 ************************************ 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.848 
************************************ 00:12:27.848 START TEST filesystem_in_capsule_btrfs 00:12:27.848 ************************************ 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:27.848 btrfs-progs v6.8.1 00:12:27.848 See https://btrfs.readthedocs.io for more information. 00:12:27.848 00:12:27.848 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:27.848 NOTE: several default settings have changed in version 5.15, please make sure 00:12:27.848 this does not affect your deployments: 00:12:27.848 - DUP for metadata (-m dup) 00:12:27.848 - enabled no-holes (-O no-holes) 00:12:27.848 - enabled free-space-tree (-R free-space-tree) 00:12:27.848 00:12:27.848 Label: (null) 00:12:27.848 UUID: 7133c97b-84e6-42c3-b38f-6d4d752baa4b 00:12:27.848 Node size: 16384 00:12:27.848 Sector size: 4096 (CPU page size: 4096) 00:12:27.848 Filesystem size: 510.00MiB 00:12:27.848 Block group profiles: 00:12:27.848 Data: single 8.00MiB 00:12:27.848 Metadata: DUP 32.00MiB 00:12:27.848 System: DUP 8.00MiB 00:12:27.848 SSD detected: yes 00:12:27.848 Zoned device: no 00:12:27.848 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:27.848 Checksum: crc32c 00:12:27.848 Number of devices: 1 00:12:27.848 Devices: 00:12:27.848 ID SIZE PATH 00:12:27.848 1 510.00MiB /dev/nvme0n1p1 00:12:27.848 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:27.848 21:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:28.414 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:28.414 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:28.414 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:28.414 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:28.414 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:28.414 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2926098 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:28.673 00:12:28.673 real 0m1.130s 00:12:28.673 user 0m0.016s 00:12:28.673 sys 0m0.099s 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:28.673 ************************************ 00:12:28.673 END TEST filesystem_in_capsule_btrfs 00:12:28.673 ************************************ 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.673 ************************************ 00:12:28.673 START TEST filesystem_in_capsule_xfs 00:12:28.673 ************************************ 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:28.673 21:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:28.673 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:28.673 = sectsz=512 attr=2, projid32bit=1 00:12:28.673 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:28.673 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:28.673 data = bsize=4096 blocks=130560, imaxpct=25 00:12:28.673 = sunit=0 swidth=0 blks 00:12:28.673 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:28.673 log =internal log bsize=4096 blocks=16384, version=2 00:12:28.673 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:28.673 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:30.047 Discarding blocks...Done. 
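The mkfs output above comes from the test's make_filesystem helper: it picks a force flag per filesystem type (the xtrace shows -f for non-ext4 types; ext4 presumably takes -F) and then runs mkfs.<fstype> against the NVMe-oF-attached partition, after which target/filesystem.sh mounts the device, writes and removes a file, and unmounts it. Below is a reduced, illustrative bash sketch of that pattern, not the exact helpers from autotest_common.sh and target/filesystem.sh; the retry loop and xtrace plumbing of the real scripts are omitted.

#!/usr/bin/env bash
# Reduced sketch of the create-and-exercise pattern visible in the surrounding log.
# The real helpers (make_filesystem, the filesystem.sh mount/touch/umount steps)
# carry retry logic and tracing that are omitted here.
set -euo pipefail

make_filesystem() {
    local fstype=$1 dev_name=$2 force
    # ext4 presumably forces with -F; btrfs/xfs force with -f, as in the xtrace above.
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    "mkfs.$fstype" "$force" "$dev_name"
}

exercise_filesystem() {
    local dev_name=$1 mnt=${2:-/mnt/device}
    mount "$dev_name" "$mnt"
    touch "$mnt/aaa"
    sync
    rm "$mnt/aaa"
    sync
    umount "$mnt"
}

# Example, assuming the namespace is already connected and partitioned:
# make_filesystem xfs /dev/nvme0n1p1 && exercise_filesystem /dev/nvme0n1p1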
00:12:30.047 21:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:30.047 21:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2926098 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:31.947 00:12:31.947 real 0m3.055s 00:12:31.947 user 0m0.016s 00:12:31.947 sys 0m0.058s 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:31.947 ************************************ 00:12:31.947 END TEST filesystem_in_capsule_xfs 00:12:31.947 ************************************ 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2926098 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2926098 ']' 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2926098 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2926098 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2926098' 00:12:31.947 killing process with pid 2926098 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2926098 00:12:31.947 21:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2926098 00:12:34.480 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:34.480 00:12:34.480 real 0m19.367s 00:12:34.480 user 1m13.212s 00:12:34.480 sys 0m2.482s 00:12:34.480 21:02:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.480 21:02:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.480 ************************************ 00:12:34.480 END TEST nvmf_filesystem_in_capsule 00:12:34.480 ************************************ 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:34.480 rmmod nvme_tcp 00:12:34.480 rmmod nvme_fabrics 00:12:34.480 rmmod nvme_keyring 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.480 21:02:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.384 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:36.384 00:12:36.384 real 0m46.002s 00:12:36.384 user 2m37.221s 00:12:36.384 sys 0m6.854s 00:12:36.384 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.384 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:36.384 
************************************ 00:12:36.384 END TEST nvmf_filesystem 00:12:36.384 ************************************ 00:12:36.384 21:02:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:36.384 21:02:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:36.384 21:02:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.384 21:02:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:36.384 ************************************ 00:12:36.384 START TEST nvmf_target_discovery 00:12:36.384 ************************************ 00:12:36.384 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:36.643 * Looking for test storage... 00:12:36.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:36.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.643 --rc genhtml_branch_coverage=1 00:12:36.643 --rc genhtml_function_coverage=1 00:12:36.643 --rc genhtml_legend=1 00:12:36.643 --rc geninfo_all_blocks=1 00:12:36.643 --rc geninfo_unexecuted_blocks=1 00:12:36.643 00:12:36.643 ' 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:36.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.643 --rc genhtml_branch_coverage=1 00:12:36.643 --rc genhtml_function_coverage=1 00:12:36.643 --rc genhtml_legend=1 00:12:36.643 --rc geninfo_all_blocks=1 00:12:36.643 --rc geninfo_unexecuted_blocks=1 00:12:36.643 00:12:36.643 ' 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:36.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.643 --rc genhtml_branch_coverage=1 00:12:36.643 --rc genhtml_function_coverage=1 00:12:36.643 --rc genhtml_legend=1 00:12:36.643 --rc geninfo_all_blocks=1 00:12:36.643 --rc geninfo_unexecuted_blocks=1 00:12:36.643 00:12:36.643 ' 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:36.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.643 --rc genhtml_branch_coverage=1 00:12:36.643 --rc genhtml_function_coverage=1 00:12:36.643 --rc genhtml_legend=1 00:12:36.643 --rc geninfo_all_blocks=1 00:12:36.643 --rc geninfo_unexecuted_blocks=1 00:12:36.643 00:12:36.643 ' 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.643 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:36.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:36.644 21:02:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.551 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.551 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:38.551 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:38.551 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:38.551 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:38.551 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:38.551 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:38.551 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:38.551 21:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:38.551 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:38.551 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:38.551 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:38.551 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:38.551 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:38.551 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:38.551 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.551 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:38.552 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:38.552 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:38.812 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:38.812 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:38.812 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.812 21:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:38.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:12:38.812 00:12:38.812 --- 10.0.0.2 ping statistics --- 00:12:38.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.812 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:38.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:12:38.812 00:12:38.812 --- 10.0.0.1 ping statistics --- 00:12:38.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.812 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:38.812 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:38.813 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:38.813 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:38.813 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.813 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2930514 00:12:38.813 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:38.813 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2930514 00:12:38.813 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2930514 ']' 00:12:38.813 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.813 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.813 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.813 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.813 21:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.813 [2024-11-19 21:02:12.598197] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:12:38.813 [2024-11-19 21:02:12.598352] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.072 [2024-11-19 21:02:12.744130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.072 [2024-11-19 21:02:12.864275] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.072 [2024-11-19 21:02:12.864359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.072 [2024-11-19 21:02:12.864382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.072 [2024-11-19 21:02:12.864425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.072 [2024-11-19 21:02:12.864442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
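At this point nvmfappstart has launched the SPDK target inside the cvl_0_0_ns_spdk namespace (ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten blocks until the application answers on /var/tmp/spdk.sock; only then does the discovery test begin provisioning subsystems over RPC. The snippet below is a condensed, illustrative sketch of that launch-and-wait step using the paths shown in this log; the rpc_get_methods polling stands in for waitforlisten's bounded retry loop and is an assumption about its internals, not a copy of it.

#!/usr/bin/env bash
# Condensed sketch of the app-start step: run nvmf_tgt in the target netns and
# wait until its JSON-RPC socket responds. Timeout handling is simplified
# compared to the real waitforlisten helper.
set -euo pipefail

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
netns=cvl_0_0_ns_spdk
rpc_sock=/var/tmp/spdk.sock

ip netns exec "$netns" "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the RPC socket; bail out (via set -e) if the target dies during startup.
until "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid"
    sleep 0.5
done

# The test then drives the target over the same socket, e.g. the calls that
# follow in this log:
#   rpc.py -s $rpc_sock nvmf_create_transport -t tcp -o -u 8192
#   rpc.py -s $rpc_sock bdev_null_create Null1 102400 512
#   rpc.py -s $rpc_sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
#   rpc.py -s $rpc_sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420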
00:12:39.330 [2024-11-19 21:02:12.867248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.330 [2024-11-19 21:02:12.867295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.330 [2024-11-19 21:02:12.867339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.330 [2024-11-19 21:02:12.867346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.897 [2024-11-19 21:02:13.580200] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.897 Null1 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.897 21:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.897 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.897 [2024-11-19 21:02:13.634138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.898 Null2 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:39.898 Null3 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.898 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.156 Null4 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.156 21:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.156 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:40.415 00:12:40.415 Discovery Log Number of Records 6, Generation counter 6 00:12:40.415 =====Discovery Log Entry 0====== 00:12:40.415 trtype: tcp 00:12:40.415 adrfam: ipv4 00:12:40.415 subtype: current discovery subsystem 00:12:40.415 treq: not required 00:12:40.415 portid: 0 00:12:40.415 trsvcid: 4420 00:12:40.415 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:40.415 traddr: 10.0.0.2 00:12:40.415 eflags: explicit discovery connections, duplicate discovery information 00:12:40.415 sectype: none 00:12:40.415 =====Discovery Log Entry 1====== 00:12:40.415 trtype: tcp 00:12:40.415 adrfam: ipv4 00:12:40.415 subtype: nvme subsystem 00:12:40.415 treq: not required 00:12:40.415 portid: 0 00:12:40.415 trsvcid: 4420 00:12:40.415 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:40.415 traddr: 10.0.0.2 00:12:40.415 eflags: none 00:12:40.415 sectype: none 00:12:40.415 =====Discovery Log Entry 2====== 00:12:40.415 trtype: tcp 00:12:40.415 adrfam: ipv4 00:12:40.415 subtype: nvme subsystem 00:12:40.415 treq: not required 00:12:40.415 portid: 0 00:12:40.415 trsvcid: 4420 00:12:40.415 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:40.415 traddr: 10.0.0.2 00:12:40.415 eflags: none 00:12:40.415 sectype: none 00:12:40.415 =====Discovery Log Entry 3====== 00:12:40.415 trtype: tcp 00:12:40.415 adrfam: ipv4 00:12:40.415 subtype: nvme subsystem 00:12:40.415 treq: not required 00:12:40.415 portid: 0 00:12:40.415 trsvcid: 4420 00:12:40.415 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:40.415 traddr: 10.0.0.2 00:12:40.415 eflags: none 00:12:40.415 sectype: none 00:12:40.416 =====Discovery Log Entry 4====== 00:12:40.416 trtype: tcp 00:12:40.416 adrfam: ipv4 00:12:40.416 subtype: nvme subsystem 
00:12:40.416 treq: not required 00:12:40.416 portid: 0 00:12:40.416 trsvcid: 4420 00:12:40.416 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:40.416 traddr: 10.0.0.2 00:12:40.416 eflags: none 00:12:40.416 sectype: none 00:12:40.416 =====Discovery Log Entry 5====== 00:12:40.416 trtype: tcp 00:12:40.416 adrfam: ipv4 00:12:40.416 subtype: discovery subsystem referral 00:12:40.416 treq: not required 00:12:40.416 portid: 0 00:12:40.416 trsvcid: 4430 00:12:40.416 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:40.416 traddr: 10.0.0.2 00:12:40.416 eflags: none 00:12:40.416 sectype: none 00:12:40.416 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:40.416 Perform nvmf subsystem discovery via RPC 00:12:40.416 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:40.416 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.416 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.416 [ 00:12:40.416 { 00:12:40.416 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:40.416 "subtype": "Discovery", 00:12:40.416 "listen_addresses": [ 00:12:40.416 { 00:12:40.416 "trtype": "TCP", 00:12:40.416 "adrfam": "IPv4", 00:12:40.416 "traddr": "10.0.0.2", 00:12:40.416 "trsvcid": "4420" 00:12:40.416 } 00:12:40.416 ], 00:12:40.416 "allow_any_host": true, 00:12:40.416 "hosts": [] 00:12:40.416 }, 00:12:40.416 { 00:12:40.416 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:40.416 "subtype": "NVMe", 00:12:40.416 "listen_addresses": [ 00:12:40.416 { 00:12:40.416 "trtype": "TCP", 00:12:40.416 "adrfam": "IPv4", 00:12:40.416 "traddr": "10.0.0.2", 00:12:40.416 "trsvcid": "4420" 00:12:40.416 } 00:12:40.416 ], 00:12:40.416 "allow_any_host": true, 00:12:40.416 "hosts": [], 00:12:40.416 "serial_number": "SPDK00000000000001", 00:12:40.416 "model_number": "SPDK bdev Controller", 00:12:40.416 "max_namespaces": 32, 00:12:40.416 "min_cntlid": 1, 00:12:40.416 "max_cntlid": 65519, 00:12:40.416 "namespaces": [ 00:12:40.416 { 00:12:40.416 "nsid": 1, 00:12:40.416 "bdev_name": "Null1", 00:12:40.416 "name": "Null1", 00:12:40.416 "nguid": "5D0E261E5581485CB6DA03DCBD5118CC", 00:12:40.416 "uuid": "5d0e261e-5581-485c-b6da-03dcbd5118cc" 00:12:40.416 } 00:12:40.416 ] 00:12:40.416 }, 00:12:40.416 { 00:12:40.416 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:40.416 "subtype": "NVMe", 00:12:40.416 "listen_addresses": [ 00:12:40.416 { 00:12:40.416 "trtype": "TCP", 00:12:40.416 "adrfam": "IPv4", 00:12:40.416 "traddr": "10.0.0.2", 00:12:40.416 "trsvcid": "4420" 00:12:40.416 } 00:12:40.416 ], 00:12:40.416 "allow_any_host": true, 00:12:40.416 "hosts": [], 00:12:40.416 "serial_number": "SPDK00000000000002", 00:12:40.416 "model_number": "SPDK bdev Controller", 00:12:40.416 "max_namespaces": 32, 00:12:40.416 "min_cntlid": 1, 00:12:40.416 "max_cntlid": 65519, 00:12:40.416 "namespaces": [ 00:12:40.416 { 00:12:40.416 "nsid": 1, 00:12:40.416 "bdev_name": "Null2", 00:12:40.416 "name": "Null2", 00:12:40.416 "nguid": "FC7FF978A37C4EF5849417083836C8E1", 00:12:40.416 "uuid": "fc7ff978-a37c-4ef5-8494-17083836c8e1" 00:12:40.416 } 00:12:40.416 ] 00:12:40.416 }, 00:12:40.416 { 00:12:40.416 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:40.416 "subtype": "NVMe", 00:12:40.416 "listen_addresses": [ 00:12:40.416 { 00:12:40.416 "trtype": "TCP", 00:12:40.416 "adrfam": "IPv4", 00:12:40.416 "traddr": "10.0.0.2", 
00:12:40.416 "trsvcid": "4420" 00:12:40.416 } 00:12:40.416 ], 00:12:40.416 "allow_any_host": true, 00:12:40.416 "hosts": [], 00:12:40.416 "serial_number": "SPDK00000000000003", 00:12:40.416 "model_number": "SPDK bdev Controller", 00:12:40.416 "max_namespaces": 32, 00:12:40.416 "min_cntlid": 1, 00:12:40.416 "max_cntlid": 65519, 00:12:40.416 "namespaces": [ 00:12:40.416 { 00:12:40.416 "nsid": 1, 00:12:40.416 "bdev_name": "Null3", 00:12:40.416 "name": "Null3", 00:12:40.416 "nguid": "C3A9A63D9C294B15B497CF05DCFA2C3E", 00:12:40.416 "uuid": "c3a9a63d-9c29-4b15-b497-cf05dcfa2c3e" 00:12:40.416 } 00:12:40.416 ] 00:12:40.416 }, 00:12:40.416 { 00:12:40.416 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:40.416 "subtype": "NVMe", 00:12:40.416 "listen_addresses": [ 00:12:40.416 { 00:12:40.416 "trtype": "TCP", 00:12:40.416 "adrfam": "IPv4", 00:12:40.416 "traddr": "10.0.0.2", 00:12:40.416 "trsvcid": "4420" 00:12:40.416 } 00:12:40.416 ], 00:12:40.416 "allow_any_host": true, 00:12:40.416 "hosts": [], 00:12:40.416 "serial_number": "SPDK00000000000004", 00:12:40.416 "model_number": "SPDK bdev Controller", 00:12:40.416 "max_namespaces": 32, 00:12:40.416 "min_cntlid": 1, 00:12:40.416 "max_cntlid": 65519, 00:12:40.416 "namespaces": [ 00:12:40.416 { 00:12:40.416 "nsid": 1, 00:12:40.416 "bdev_name": "Null4", 00:12:40.416 "name": "Null4", 00:12:40.416 "nguid": "B288907F847045498EE52932979E221E", 00:12:40.416 "uuid": "b288907f-8470-4549-8ee5-2932979e221e" 00:12:40.416 } 00:12:40.416 ] 00:12:40.416 } 00:12:40.416 ] 00:12:40.416 21:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.416 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:40.416 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:40.416 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.416 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.416 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.416 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.416 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:40.416 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.416 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.416 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.416 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:40.416 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:40.416 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.417 21:02:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:40.417 21:02:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:40.417 rmmod nvme_tcp 00:12:40.417 rmmod nvme_fabrics 00:12:40.417 rmmod nvme_keyring 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2930514 ']' 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2930514 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2930514 ']' 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2930514 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.417 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2930514 00:12:40.711 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.711 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.711 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2930514' 00:12:40.711 killing process with pid 2930514 00:12:40.711 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2930514 00:12:40.711 21:02:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2930514 00:12:41.710 21:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:41.710 21:02:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:41.710 21:02:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:41.710 21:02:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:41.710 21:02:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:41.710 21:02:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:41.710 21:02:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:41.710 21:02:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:41.710 21:02:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:41.710 21:02:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.710 21:02:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.710 21:02:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.614 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:43.614 00:12:43.614 real 0m7.216s 00:12:43.614 user 0m9.806s 00:12:43.614 sys 0m2.083s 00:12:43.614 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.614 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.614 ************************************ 00:12:43.614 END TEST nvmf_target_discovery 00:12:43.614 ************************************ 00:12:43.614 21:02:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:43.614 21:02:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:43.614 21:02:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.614 21:02:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:43.873 ************************************ 00:12:43.873 START TEST nvmf_referrals 00:12:43.873 ************************************ 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:43.873 * Looking for test storage... 
00:12:43.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:43.873 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:43.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.874 --rc genhtml_branch_coverage=1 00:12:43.874 --rc genhtml_function_coverage=1 00:12:43.874 --rc genhtml_legend=1 00:12:43.874 --rc geninfo_all_blocks=1 00:12:43.874 --rc geninfo_unexecuted_blocks=1 00:12:43.874 00:12:43.874 ' 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:43.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.874 --rc genhtml_branch_coverage=1 00:12:43.874 --rc genhtml_function_coverage=1 00:12:43.874 --rc genhtml_legend=1 00:12:43.874 --rc geninfo_all_blocks=1 00:12:43.874 --rc geninfo_unexecuted_blocks=1 00:12:43.874 00:12:43.874 ' 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:43.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.874 --rc genhtml_branch_coverage=1 00:12:43.874 --rc genhtml_function_coverage=1 00:12:43.874 --rc genhtml_legend=1 00:12:43.874 --rc geninfo_all_blocks=1 00:12:43.874 --rc geninfo_unexecuted_blocks=1 00:12:43.874 00:12:43.874 ' 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:43.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.874 --rc genhtml_branch_coverage=1 00:12:43.874 --rc genhtml_function_coverage=1 00:12:43.874 --rc genhtml_legend=1 00:12:43.874 --rc geninfo_all_blocks=1 00:12:43.874 --rc geninfo_unexecuted_blocks=1 00:12:43.874 00:12:43.874 ' 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:43.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:43.874 21:02:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:46.409 21:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:46.409 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:46.409 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:46.409 
21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.409 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:46.409 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:46.410 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:46.410 21:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:46.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:46.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:12:46.410 00:12:46.410 --- 10.0.0.2 ping statistics --- 00:12:46.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.410 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:46.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:12:46.410 00:12:46.410 --- 10.0.0.1 ping statistics --- 00:12:46.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.410 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2932750 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2932750 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2932750 ']' 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.410 21:02:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.410 [2024-11-19 21:02:20.012802] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:12:46.410 [2024-11-19 21:02:20.012952] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.410 [2024-11-19 21:02:20.170954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.668 [2024-11-19 21:02:20.315496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.668 [2024-11-19 21:02:20.315568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.668 [2024-11-19 21:02:20.315594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.668 [2024-11-19 21:02:20.315618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.668 [2024-11-19 21:02:20.315637] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.668 [2024-11-19 21:02:20.318371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.668 [2024-11-19 21:02:20.318435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.668 [2024-11-19 21:02:20.318470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.668 [2024-11-19 21:02:20.318478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.602 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:47.602 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:47.602 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:47.602 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:47.602 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.602 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.602 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:47.602 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.602 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.602 [2024-11-19 21:02:21.059410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.602 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.602 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:47.602 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.602 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:12:47.602 [2024-11-19 21:02:21.083220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:47.602 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.602 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:47.602 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:47.603 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:47.861 21:02:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:47.861 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:48.119 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:48.378 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:48.378 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:48.378 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:48.378 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:48.378 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:48.378 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.378 21:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:48.378 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:48.378 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:48.378 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:48.378 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:48.378 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.378 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.637 21:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:48.637 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:48.895 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:48.895 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:48.895 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:48.895 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:48.895 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:48.895 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.895 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:49.152 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:49.152 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:49.152 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:49.152 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:49.152 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:49.152 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:49.152 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:49.152 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:49.152 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.152 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.152 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.410 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:49.410 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:49.410 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.410 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:49.410 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.410 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:49.410 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:49.410 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:49.410 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:49.410 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:49.410 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:49.410 21:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:49.410 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:49.410 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:49.410 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:49.410 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:49.410 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:49.410 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:49.410 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
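The referral checks above exercise the SPDK discovery-referral RPCs end to end: referrals are added over RPC, read back both through nvmf_discovery_get_referrals and through an NVMe/TCP discovery log page, and then removed again. A minimal standalone sketch of the same round trip, assuming scripts/rpc.py is the client behind the rpc_cmd wrapper and that the discovery service is the one listening on 10.0.0.2:8009 in this run:

# hypothetical reproduction of the referral flow seen above
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# register a referral pointing at another discovery service (127.0.0.2:4430)
$rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430

# the referral is visible through the RPC interface ...
$rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr'

# ... and through the discovery log page fetched by nvme-cli; the jq filter
# drops the record describing the discovery subsystem we are querying
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json | \
  jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

# remove it again; nvmf_discovery_get_referrals then returns an empty list
$rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430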
00:12:49.410 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:49.410 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:49.410 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:49.410 rmmod nvme_tcp 00:12:49.668 rmmod nvme_fabrics 00:12:49.668 rmmod nvme_keyring 00:12:49.668 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:49.669 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:49.669 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:49.669 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2932750 ']' 00:12:49.669 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2932750 00:12:49.669 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2932750 ']' 00:12:49.669 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2932750 00:12:49.669 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:49.669 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:49.669 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2932750 00:12:49.669 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:49.669 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:49.669 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2932750' 00:12:49.669 killing process with pid 2932750 00:12:49.669 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2932750 00:12:49.669 21:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2932750 00:12:50.603 21:02:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:50.861 21:02:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:50.861 21:02:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:50.861 21:02:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:50.861 21:02:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:50.861 21:02:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:50.861 21:02:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:50.861 21:02:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:50.861 21:02:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:50.861 21:02:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.861 21:02:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.861 21:02:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.767 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:52.767 00:12:52.767 real 0m9.019s 00:12:52.767 user 0m17.002s 00:12:52.767 sys 0m2.568s 00:12:52.767 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.767 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.767 ************************************ 00:12:52.767 END TEST nvmf_referrals 00:12:52.767 ************************************ 00:12:52.767 21:02:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:52.767 21:02:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:52.767 21:02:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.767 21:02:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:52.767 ************************************ 00:12:52.767 START TEST nvmf_connect_disconnect 00:12:52.767 ************************************ 00:12:52.767 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:52.767 * Looking for test storage... 00:12:52.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.767 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:52.767 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:52.767 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:53.027 21:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:53.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.027 --rc genhtml_branch_coverage=1 00:12:53.027 --rc genhtml_function_coverage=1 00:12:53.027 --rc genhtml_legend=1 00:12:53.027 --rc geninfo_all_blocks=1 00:12:53.027 --rc geninfo_unexecuted_blocks=1 00:12:53.027 00:12:53.027 ' 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:53.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.027 --rc genhtml_branch_coverage=1 00:12:53.027 --rc genhtml_function_coverage=1 00:12:53.027 --rc genhtml_legend=1 00:12:53.027 --rc geninfo_all_blocks=1 00:12:53.027 --rc geninfo_unexecuted_blocks=1 00:12:53.027 00:12:53.027 ' 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:53.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.027 --rc genhtml_branch_coverage=1 00:12:53.027 --rc genhtml_function_coverage=1 00:12:53.027 --rc genhtml_legend=1 00:12:53.027 --rc geninfo_all_blocks=1 00:12:53.027 --rc geninfo_unexecuted_blocks=1 00:12:53.027 00:12:53.027 ' 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:53.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.027 --rc genhtml_branch_coverage=1 00:12:53.027 --rc genhtml_function_coverage=1 00:12:53.027 --rc genhtml_legend=1 00:12:53.027 --rc geninfo_all_blocks=1 00:12:53.027 --rc geninfo_unexecuted_blocks=1 00:12:53.027 00:12:53.027 ' 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:53.027 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.028 21:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:53.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:53.028 21:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:55.567 
21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:55.567 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:55.567 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:55.568 
21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:55.568 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:55.568 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
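The device scan above walks each candidate PCI function and resolves it to a kernel netdev through sysfs before settling on cvl_0_0/cvl_0_1 as the test interfaces. A minimal sketch of that mapping, using the 0000:0a:00.0 address reported in this run (any other bound NIC would resolve the same way):

# list the net interface(s) the kernel exposes for one PCI network function
pci=0000:0a:00.0
for netdev in /sys/bus/pci/devices/$pci/net/*; do
  echo "Found net device under $pci: ${netdev##*/}"
done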
00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:55.568 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:55.568 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:55.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:55.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:12:55.569 00:12:55.569 --- 10.0.0.2 ping statistics --- 00:12:55.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.569 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:55.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:55.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:12:55.569 00:12:55.569 --- 10.0.0.1 ping statistics --- 00:12:55.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.569 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2935319 00:12:55.569 21:02:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2935319 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2935319 ']' 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.569 21:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:55.569 [2024-11-19 21:02:29.069913] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:12:55.569 [2024-11-19 21:02:29.070125] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.569 [2024-11-19 21:02:29.218736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:55.569 [2024-11-19 21:02:29.357989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.569 [2024-11-19 21:02:29.358077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.569 [2024-11-19 21:02:29.358105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.569 [2024-11-19 21:02:29.358139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.569 [2024-11-19 21:02:29.358158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
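With nvmf_tgt started inside the cvl_0_0_ns_spdk namespace, the entries that follow provision it over RPC: a TCP transport, a RAM-backed bdev, one subsystem, a namespace and a listener. A condensed sketch of that sequence, assuming the default /var/tmp/spdk.sock RPC socket and scripts/rpc.py behind the rpc_cmd wrapper:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0    # TCP transport, options as used by this run
$rpc bdev_malloc_create 64 512                       # 64 MiB malloc bdev, 512-byte blocks -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # any host allowed, serial from NVMF_SERIAL
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The long run of "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines further down is the host side of the test: 100 iterations (num_iterations=100) of "nvme connect -i 8" followed by an nvme disconnect against this subsystem.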
00:12:55.841 [2024-11-19 21:02:29.361084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.841 [2024-11-19 21:02:29.361147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.841 [2024-11-19 21:02:29.361178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.841 [2024-11-19 21:02:29.361184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.407 [2024-11-19 21:02:30.056336] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.407 21:02:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.407 [2024-11-19 21:02:30.179420] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:56.407 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:56.408 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:56.408 21:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:58.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.843 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:16:05.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.634 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:52.634 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:52.634 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:52.634 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:52.634 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:52.634 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:52.634 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:52.634 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:52.634 rmmod nvme_tcp 00:16:52.634 rmmod nvme_fabrics 00:16:52.634 rmmod nvme_keyring 00:16:52.634 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:52.634 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:52.634 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:52.634 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2935319 ']' 00:16:52.634 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2935319 00:16:52.634 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2935319 ']' 00:16:52.634 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2935319 00:16:52.634 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
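The long run of "disconnected 1 controller(s)" messages above is the output of the connect_disconnect loop: the target side was populated over RPC (TCP transport, a 64 MB Malloc bdev, subsystem cnode1 with that namespace, and a listener on 10.0.0.2:4420, as traced earlier), and the initiator then connects and disconnects 100 times with nvme-cli. Below is a rough sketch of that flow using the flags visible in the trace; it is a simplified stand-in for connect_disconnect.sh, not its actual code, and the settle time between connect and disconnect is an assumption.

    # target side: build the subsystem over the RPC socket (flags copied from the trace)
    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc bdev_malloc_create 64 512                      # 64 MB bdev, 512-byte blocks -> Malloc0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: 100 connect/disconnect iterations, 8 I/O queues per connect
    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        sleep 1                                        # assumed settle time; the real script waits on the block device
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # prints the "disconnected 1 controller(s)" lines seen above
    done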
00:16:52.635 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:52.635 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2935319 00:16:52.635 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:52.635 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:52.635 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2935319' 00:16:52.635 killing process with pid 2935319 00:16:52.635 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2935319 00:16:52.635 21:06:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2935319 00:16:54.010 21:06:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:54.010 21:06:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:54.010 21:06:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:54.010 21:06:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:54.010 21:06:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:54.010 21:06:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:54.010 21:06:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:54.010 21:06:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:54.010 21:06:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:54.010 21:06:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.010 21:06:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:54.010 21:06:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.914 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:55.914 00:16:55.914 real 4m3.138s 00:16:55.914 user 15m18.979s 00:16:55.914 sys 0m39.888s 00:16:55.914 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.914 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:55.914 ************************************ 00:16:55.914 END TEST nvmf_connect_disconnect 00:16:55.914 ************************************ 00:16:55.914 21:06:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:55.914 21:06:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:55.914 21:06:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.914 21:06:29 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:55.914 ************************************ 00:16:55.914 START TEST nvmf_multitarget 00:16:55.914 ************************************ 00:16:55.914 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:56.173 * Looking for test storage... 00:16:56.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:56.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.173 --rc genhtml_branch_coverage=1 00:16:56.173 --rc genhtml_function_coverage=1 00:16:56.173 --rc genhtml_legend=1 00:16:56.173 --rc geninfo_all_blocks=1 00:16:56.173 --rc geninfo_unexecuted_blocks=1 00:16:56.173 00:16:56.173 ' 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:56.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.173 --rc genhtml_branch_coverage=1 00:16:56.173 --rc genhtml_function_coverage=1 00:16:56.173 --rc genhtml_legend=1 00:16:56.173 --rc geninfo_all_blocks=1 00:16:56.173 --rc geninfo_unexecuted_blocks=1 00:16:56.173 00:16:56.173 ' 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:56.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.173 --rc genhtml_branch_coverage=1 00:16:56.173 --rc genhtml_function_coverage=1 00:16:56.173 --rc genhtml_legend=1 00:16:56.173 --rc geninfo_all_blocks=1 00:16:56.173 --rc geninfo_unexecuted_blocks=1 00:16:56.173 00:16:56.173 ' 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:56.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.173 --rc genhtml_branch_coverage=1 00:16:56.173 --rc genhtml_function_coverage=1 00:16:56.173 --rc genhtml_legend=1 00:16:56.173 --rc geninfo_all_blocks=1 00:16:56.173 --rc geninfo_unexecuted_blocks=1 00:16:56.173 00:16:56.173 ' 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.173 21:06:29 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:56.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:56.173 21:06:29 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:56.173 21:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:58.705 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.705 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:58.706 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:58.706 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:58.706 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:58.706 21:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:58.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:58.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:16:58.706 00:16:58.706 --- 10.0.0.2 ping statistics --- 00:16:58.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.706 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:58.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:58.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:16:58.706 00:16:58.706 --- 10.0.0.1 ping statistics --- 00:16:58.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.706 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2967827 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2967827 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2967827 ']' 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.706 21:06:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:58.706 [2024-11-19 21:06:32.190789] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
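The nvmf_tgt instance being started here runs against a test network that nvmftestinit prepared a few lines earlier: the two detected E810 ports (cvl_0_0 and cvl_0_1) are split across namespaces, one becomes the target side at 10.0.0.2 inside cvl_0_0_ns_spdk, the other stays in the root namespace as the initiator at 10.0.0.1, and both directions are verified with ping. Condensed into plain commands below; device names, addresses and the iptables rule are taken from the trace, while the lspci/sysfs lookup is a hedged illustration of how such ports can be found rather than the harness's exact detection code.

    # locate the Intel E810 ports (vendor 0x8086, device 0x159b) and their net devices
    lspci -D -d 8086:159b                        # e.g. 0000:0a:00.0 and 0000:0a:00.1
    ls /sys/bus/pci/devices/0000:0a:00.0/net     # -> cvl_0_0 (names assumed to match the log)

    # target port goes into its own namespace, initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open TCP/4420 on the initiator interface and check reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1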
00:16:58.706 [2024-11-19 21:06:32.190924] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.706 [2024-11-19 21:06:32.346777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:58.706 [2024-11-19 21:06:32.489946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.706 [2024-11-19 21:06:32.490025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.706 [2024-11-19 21:06:32.490051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.706 [2024-11-19 21:06:32.490084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.706 [2024-11-19 21:06:32.490106] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.706 [2024-11-19 21:06:32.493019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.706 [2024-11-19 21:06:32.493099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.706 [2024-11-19 21:06:32.493190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.706 [2024-11-19 21:06:32.493196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:59.642 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.642 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:59.642 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:59.642 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:59.642 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:59.642 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.642 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:59.642 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:59.642 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:59.642 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:59.642 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:59.642 "nvmf_tgt_1" 00:16:59.642 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:59.899 "nvmf_tgt_2" 00:16:59.899 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:16:59.899 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:59.900 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:59.900 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:00.157 true 00:17:00.157 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:00.157 true 00:17:00.157 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:00.157 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:00.415 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:00.415 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:00.415 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:00.415 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:00.415 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:00.415 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:00.415 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:00.415 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:00.415 21:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:00.415 rmmod nvme_tcp 00:17:00.415 rmmod nvme_fabrics 00:17:00.415 rmmod nvme_keyring 00:17:00.415 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:00.415 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:00.415 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:00.415 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2967827 ']' 00:17:00.415 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2967827 00:17:00.415 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2967827 ']' 00:17:00.415 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2967827 00:17:00.415 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:00.415 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.415 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2967827 00:17:00.415 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.415 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.415 21:06:34 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2967827' 00:17:00.415 killing process with pid 2967827 00:17:00.415 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2967827 00:17:00.415 21:06:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2967827 00:17:01.790 21:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:01.790 21:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:01.790 21:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:01.790 21:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:01.790 21:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:01.790 21:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:01.790 21:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:01.790 21:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:01.790 21:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:01.790 21:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.790 21:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.790 21:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:03.693 00:17:03.693 real 0m7.518s 00:17:03.693 user 0m11.835s 00:17:03.693 sys 0m2.171s 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:03.693 ************************************ 00:17:03.693 END TEST nvmf_multitarget 00:17:03.693 ************************************ 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:03.693 ************************************ 00:17:03.693 START TEST nvmf_rpc 00:17:03.693 ************************************ 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:03.693 * Looking for test storage... 
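The nvmf_multitarget run that just finished above exercises hosting more than one target instance in a single nvmf_tgt process: it checks that exactly one default target exists, creates nvmf_tgt_1 and nvmf_tgt_2, confirms the count is three, deletes both again, and confirms the count is back to one. A hedged recap of that sequence using the same helper script seen in the trace; the flags are copied from the log, and the explicit exit-on-mismatch handling is added for illustration.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ] || exit 1   # only the default target to start with
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32                  # -s value copied from the trace
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ] || exit 1
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ] || exit 1   # back to the default target only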
00:17:03.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:03.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.693 --rc genhtml_branch_coverage=1 00:17:03.693 --rc genhtml_function_coverage=1 00:17:03.693 --rc genhtml_legend=1 00:17:03.693 --rc geninfo_all_blocks=1 00:17:03.693 --rc geninfo_unexecuted_blocks=1 00:17:03.693 00:17:03.693 ' 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:03.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.693 --rc genhtml_branch_coverage=1 00:17:03.693 --rc genhtml_function_coverage=1 00:17:03.693 --rc genhtml_legend=1 00:17:03.693 --rc geninfo_all_blocks=1 00:17:03.693 --rc geninfo_unexecuted_blocks=1 00:17:03.693 00:17:03.693 ' 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:03.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.693 --rc genhtml_branch_coverage=1 00:17:03.693 --rc genhtml_function_coverage=1 00:17:03.693 --rc genhtml_legend=1 00:17:03.693 --rc geninfo_all_blocks=1 00:17:03.693 --rc geninfo_unexecuted_blocks=1 00:17:03.693 00:17:03.693 ' 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:03.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.693 --rc genhtml_branch_coverage=1 00:17:03.693 --rc genhtml_function_coverage=1 00:17:03.693 --rc genhtml_legend=1 00:17:03.693 --rc geninfo_all_blocks=1 00:17:03.693 --rc geninfo_unexecuted_blocks=1 00:17:03.693 00:17:03.693 ' 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
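In the lcov check traced above, lt 1.15 2 calls cmp_versions, which splits each version string on '.', '-' and ':' and compares the fields numerically from left to right to decide whether the installed lcov predates 2.x (and therefore which --rc coverage options to pass). A standalone sketch of that comparison, assuming this simplified helper rather than the full scripts/common.sh logic (non-numeric fields such as pre-release suffixes are not handled here):

  # hypothetical, simplified stand-in for the lt/cmp_versions helpers
  lt() {   # exit 0 when version $1 is strictly lower than version $2
      local -a a b
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      local i
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
  }
  lt 1.15 2 && echo "lcov older than 2.x: use the lcov_branch/function_coverage rc options"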
00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.693 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:03.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:03.694 21:06:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:03.694 21:06:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:06.224 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:06.224 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:06.224 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:06.224 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:06.224 21:06:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:06.224 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:06.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:17:06.225 00:17:06.225 --- 10.0.0.2 ping statistics --- 00:17:06.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.225 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:06.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:06.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:17:06.225 00:17:06.225 --- 10.0.0.1 ping statistics --- 00:17:06.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.225 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2970177 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2970177 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2970177 ']' 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.225 21:06:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.225 [2024-11-19 21:06:39.672294] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:17:06.225 [2024-11-19 21:06:39.672454] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.225 [2024-11-19 21:06:39.819626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.225 [2024-11-19 21:06:39.958896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.225 [2024-11-19 21:06:39.958974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.225 [2024-11-19 21:06:39.958999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.225 [2024-11-19 21:06:39.959022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.225 [2024-11-19 21:06:39.959041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.225 [2024-11-19 21:06:39.961864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.225 [2024-11-19 21:06:39.961935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.225 [2024-11-19 21:06:39.962032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.225 [2024-11-19 21:06:39.962037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:07.158 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.158 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:07.158 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:07.158 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:07.158 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.158 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.158 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:07.158 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.158 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.158 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.158 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:07.158 "tick_rate": 2700000000, 00:17:07.158 "poll_groups": [ 00:17:07.158 { 00:17:07.158 "name": "nvmf_tgt_poll_group_000", 00:17:07.158 "admin_qpairs": 0, 00:17:07.158 "io_qpairs": 0, 00:17:07.158 "current_admin_qpairs": 0, 00:17:07.158 "current_io_qpairs": 0, 00:17:07.158 "pending_bdev_io": 0, 00:17:07.158 "completed_nvme_io": 0, 00:17:07.158 "transports": [] 00:17:07.158 }, 00:17:07.158 { 00:17:07.158 "name": "nvmf_tgt_poll_group_001", 00:17:07.158 "admin_qpairs": 0, 00:17:07.158 "io_qpairs": 0, 00:17:07.158 "current_admin_qpairs": 0, 00:17:07.158 "current_io_qpairs": 0, 00:17:07.158 "pending_bdev_io": 0, 00:17:07.158 "completed_nvme_io": 0, 00:17:07.158 "transports": [] 00:17:07.158 }, 00:17:07.158 { 00:17:07.158 "name": "nvmf_tgt_poll_group_002", 00:17:07.158 "admin_qpairs": 0, 00:17:07.159 "io_qpairs": 0, 00:17:07.159 
"current_admin_qpairs": 0, 00:17:07.159 "current_io_qpairs": 0, 00:17:07.159 "pending_bdev_io": 0, 00:17:07.159 "completed_nvme_io": 0, 00:17:07.159 "transports": [] 00:17:07.159 }, 00:17:07.159 { 00:17:07.159 "name": "nvmf_tgt_poll_group_003", 00:17:07.159 "admin_qpairs": 0, 00:17:07.159 "io_qpairs": 0, 00:17:07.159 "current_admin_qpairs": 0, 00:17:07.159 "current_io_qpairs": 0, 00:17:07.159 "pending_bdev_io": 0, 00:17:07.159 "completed_nvme_io": 0, 00:17:07.159 "transports": [] 00:17:07.159 } 00:17:07.159 ] 00:17:07.159 }' 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.159 [2024-11-19 21:06:40.752697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:07.159 "tick_rate": 2700000000, 00:17:07.159 "poll_groups": [ 00:17:07.159 { 00:17:07.159 "name": "nvmf_tgt_poll_group_000", 00:17:07.159 "admin_qpairs": 0, 00:17:07.159 "io_qpairs": 0, 00:17:07.159 "current_admin_qpairs": 0, 00:17:07.159 "current_io_qpairs": 0, 00:17:07.159 "pending_bdev_io": 0, 00:17:07.159 "completed_nvme_io": 0, 00:17:07.159 "transports": [ 00:17:07.159 { 00:17:07.159 "trtype": "TCP" 00:17:07.159 } 00:17:07.159 ] 00:17:07.159 }, 00:17:07.159 { 00:17:07.159 "name": "nvmf_tgt_poll_group_001", 00:17:07.159 "admin_qpairs": 0, 00:17:07.159 "io_qpairs": 0, 00:17:07.159 "current_admin_qpairs": 0, 00:17:07.159 "current_io_qpairs": 0, 00:17:07.159 "pending_bdev_io": 0, 00:17:07.159 "completed_nvme_io": 0, 00:17:07.159 "transports": [ 00:17:07.159 { 00:17:07.159 "trtype": "TCP" 00:17:07.159 } 00:17:07.159 ] 00:17:07.159 }, 00:17:07.159 { 00:17:07.159 "name": "nvmf_tgt_poll_group_002", 00:17:07.159 "admin_qpairs": 0, 00:17:07.159 "io_qpairs": 0, 00:17:07.159 "current_admin_qpairs": 0, 00:17:07.159 "current_io_qpairs": 0, 00:17:07.159 "pending_bdev_io": 0, 00:17:07.159 "completed_nvme_io": 0, 00:17:07.159 "transports": [ 00:17:07.159 { 00:17:07.159 "trtype": "TCP" 
00:17:07.159 } 00:17:07.159 ] 00:17:07.159 }, 00:17:07.159 { 00:17:07.159 "name": "nvmf_tgt_poll_group_003", 00:17:07.159 "admin_qpairs": 0, 00:17:07.159 "io_qpairs": 0, 00:17:07.159 "current_admin_qpairs": 0, 00:17:07.159 "current_io_qpairs": 0, 00:17:07.159 "pending_bdev_io": 0, 00:17:07.159 "completed_nvme_io": 0, 00:17:07.159 "transports": [ 00:17:07.159 { 00:17:07.159 "trtype": "TCP" 00:17:07.159 } 00:17:07.159 ] 00:17:07.159 } 00:17:07.159 ] 00:17:07.159 }' 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.159 Malloc1 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.159 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.417 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.417 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.417 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.417 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.417 [2024-11-19 21:06:40.962297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.417 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.417 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:07.417 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:07.417 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:07.417 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:07.417 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.417 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:07.418 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.418 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:07.418 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.418 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:07.418 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:07.418 21:06:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:07.418 [2024-11-19 21:06:40.988294] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:07.418 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:07.418 could not add new controller: failed to write to nvme-fabrics device 00:17:07.418 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:07.418 21:06:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:07.418 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:07.418 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:07.418 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.418 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.418 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.418 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.418 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:07.984 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:07.984 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:07.984 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:07.984 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:07.984 21:06:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:09.882 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:09.882 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:09.882 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:09.882 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:09.882 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:09.882 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:09.882 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:10.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:10.140 [2024-11-19 21:06:43.789539] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:10.140 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:10.140 could not add new controller: failed to write to nvme-fabrics device 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.140 
21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.140 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:10.753 21:06:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:10.753 21:06:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:10.753 21:06:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:10.753 21:06:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:10.753 21:06:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:12.698 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:12.698 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:12.698 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:12.698 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:12.698 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:12.698 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:12.698 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:12.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:12.956 
21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.956 [2024-11-19 21:06:46.660754] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.956 21:06:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:13.522 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:13.522 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:13.522 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:13.522 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:13.522 21:06:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:16.049 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:16.049 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:16.049 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:16.049 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:16.049 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:16.049 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:16.049 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:16.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.049 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:16.049 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:16.049 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:16.049 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:16.049 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:16.049 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:16.049 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:16.049 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:16.049 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.049 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.049 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.050 [2024-11-19 21:06:49.471163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.050 21:06:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:16.616 21:06:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:16.616 21:06:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:16.616 21:06:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:16.616 21:06:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:16.616 21:06:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:18.515 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:18.515 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:18.515 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:18.515 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:18.515 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:18.515 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:18.515 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:18.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.773 [2024-11-19 21:06:52.467304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.773 21:06:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:19.707 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:19.707 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:19.708 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:19.708 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:19.708 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:21.665 
21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:21.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
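[editor's note] The polling just traced (autotest_common.sh @1202-@1212) is the waitforserial helper: it sleeps, counts block devices whose SERIAL column matches the subsystem serial via lsblk, and returns once the expected number of devices shows up. A minimal reconstruction from the xtrace, not the authoritative helper — the retry bound of 15 and the 2-second sleep are taken from the trace, the argument handling is assumed:

waitforserial() {
    # $1 = serial to wait for, $2 = expected device count (defaults to 1)
    local serial=$1 i=0
    local nvme_device_counter=${2:-1} nvme_devices=0
    sleep 2                                    # let the fabric connect settle
    while (( i++ <= 15 )); do
        # count lsblk rows whose SERIAL matches, e.g. SPDKISFASTANDAWESOME
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}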
00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.665 [2024-11-19 21:06:55.339778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.665 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:22.231 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:22.231 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:22.231 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:22.231 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:22.231 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:24.758 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:24.758 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:24.758 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:24.758 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:24.758 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:24.758 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:24.758 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:24.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.758 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:24.758 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:24.758 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:24.758 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
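[editor's note] Each iteration above repeats the same create/connect/tear-down cycle driven by target/rpc.sh (@81-@94 in the xtrace). Reconstructed here as a sketch from the trace; $loops and the rpc_cmd / NVME_HOST helpers are assumed to be provided by the surrounding test scripts:

for i in $(seq 1 "$loops"); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    # NVME_HOST expands to --hostnqn=... --hostid=... (defined in nvmf/common.sh, sourced later in this log)
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done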
00:17:24.758 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:24.758 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:24.758 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:24.758 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:24.758 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.758 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.758 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.758 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:24.758 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.758 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.758 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.758 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:24.758 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:24.759 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.759 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.759 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.759 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:24.759 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.759 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.759 [2024-11-19 21:06:58.127397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.759 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.759 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:24.759 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.759 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.759 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.759 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:24.759 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.759 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.759 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.759 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:25.325 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:25.325 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:25.325 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:25.325 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:25.325 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:27.233 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:27.233 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:27.233 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:27.233 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:27.233 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:27.233 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:27.233 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:27.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.233 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:27.233 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:27.233 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:27.233 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.233 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:27.233 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.233 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:27.233 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:27.233 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.234 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.234 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.234 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.234 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.234 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.234 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.234 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:27.234 
21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:27.234 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:27.234 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.234 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.234 21:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.234 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.234 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.234 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.234 [2024-11-19 21:07:01.005084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.234 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.234 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:27.234 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.234 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.234 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.234 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:27.234 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.234 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.234 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.234 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:27.234 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.234 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.492 [2024-11-19 21:07:01.053164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.492 
21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.492 [2024-11-19 21:07:01.101518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.492 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.493 [2024-11-19 21:07:01.149462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.493 [2024-11-19 21:07:01.197640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:27.493 "tick_rate": 2700000000, 00:17:27.493 "poll_groups": [ 00:17:27.493 { 00:17:27.493 "name": "nvmf_tgt_poll_group_000", 00:17:27.493 "admin_qpairs": 2, 00:17:27.493 "io_qpairs": 84, 00:17:27.493 "current_admin_qpairs": 0, 00:17:27.493 "current_io_qpairs": 0, 00:17:27.493 "pending_bdev_io": 0, 00:17:27.493 "completed_nvme_io": 85, 00:17:27.493 "transports": [ 00:17:27.493 { 00:17:27.493 "trtype": "TCP" 00:17:27.493 } 00:17:27.493 ] 00:17:27.493 }, 00:17:27.493 { 00:17:27.493 "name": "nvmf_tgt_poll_group_001", 00:17:27.493 "admin_qpairs": 2, 00:17:27.493 "io_qpairs": 84, 00:17:27.493 "current_admin_qpairs": 0, 00:17:27.493 "current_io_qpairs": 0, 00:17:27.493 "pending_bdev_io": 0, 00:17:27.493 "completed_nvme_io": 184, 00:17:27.493 "transports": [ 00:17:27.493 { 00:17:27.493 "trtype": "TCP" 00:17:27.493 } 00:17:27.493 ] 00:17:27.493 }, 00:17:27.493 { 00:17:27.493 "name": "nvmf_tgt_poll_group_002", 00:17:27.493 "admin_qpairs": 1, 00:17:27.493 "io_qpairs": 84, 00:17:27.493 "current_admin_qpairs": 0, 00:17:27.493 "current_io_qpairs": 0, 00:17:27.493 "pending_bdev_io": 0, 00:17:27.493 "completed_nvme_io": 282, 00:17:27.493 "transports": [ 00:17:27.493 { 00:17:27.493 "trtype": "TCP" 00:17:27.493 } 00:17:27.493 ] 00:17:27.493 }, 00:17:27.493 { 00:17:27.493 "name": "nvmf_tgt_poll_group_003", 00:17:27.493 "admin_qpairs": 2, 00:17:27.493 "io_qpairs": 84, 00:17:27.493 "current_admin_qpairs": 0, 00:17:27.493 "current_io_qpairs": 0, 00:17:27.493 "pending_bdev_io": 0, 00:17:27.493 "completed_nvme_io": 135, 00:17:27.493 "transports": [ 00:17:27.493 { 00:17:27.493 "trtype": "TCP" 00:17:27.493 } 00:17:27.493 ] 00:17:27.493 } 00:17:27.493 ] 00:17:27.493 }' 00:17:27.493 21:07:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:27.493 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:27.752 rmmod nvme_tcp 00:17:27.752 rmmod nvme_fabrics 00:17:27.752 rmmod nvme_keyring 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2970177 ']' 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2970177 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2970177 ']' 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2970177 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2970177 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2970177' 00:17:27.752 killing process with pid 2970177 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2970177 00:17:27.752 21:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2970177 00:17:29.127 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:29.127 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:29.127 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:29.127 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:29.127 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:29.127 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:29.127 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:29.127 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:29.127 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:29.127 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.127 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.127 21:07:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.033 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:31.033 00:17:31.033 real 0m27.490s 00:17:31.033 user 1m28.938s 00:17:31.033 sys 0m4.573s 00:17:31.033 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.033 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.033 ************************************ 00:17:31.033 END TEST nvmf_rpc 00:17:31.033 ************************************ 00:17:31.033 21:07:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:31.033 21:07:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:31.033 21:07:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:31.033 21:07:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:31.033 ************************************ 00:17:31.033 START TEST nvmf_invalid 00:17:31.033 ************************************ 00:17:31.033 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:31.293 * Looking for test storage... 
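[editor's note] Just before the nvmf_invalid suite kicked off above, the nvmf_rpc test summarised its nvmf_get_stats output by summing one counter across all poll groups (the jsum calls at target/rpc.sh @112/@113 in the trace). A stand-alone sketch of that aggregation, with the $stats variable assumed to hold the JSON shown earlier in the log:

jsum() {
    # sum a single numeric field selected by a jq filter across all poll groups
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}
# jsum '.poll_groups[].admin_qpairs'  -> 7    (2+2+1+2)
# jsum '.poll_groups[].io_qpairs'     -> 336  (4 poll groups x 84)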
00:17:31.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:31.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.293 --rc genhtml_branch_coverage=1 00:17:31.293 --rc genhtml_function_coverage=1 00:17:31.293 --rc genhtml_legend=1 00:17:31.293 --rc geninfo_all_blocks=1 00:17:31.293 --rc geninfo_unexecuted_blocks=1 00:17:31.293 00:17:31.293 ' 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:31.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.293 --rc genhtml_branch_coverage=1 00:17:31.293 --rc genhtml_function_coverage=1 00:17:31.293 --rc genhtml_legend=1 00:17:31.293 --rc geninfo_all_blocks=1 00:17:31.293 --rc geninfo_unexecuted_blocks=1 00:17:31.293 00:17:31.293 ' 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:31.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.293 --rc genhtml_branch_coverage=1 00:17:31.293 --rc genhtml_function_coverage=1 00:17:31.293 --rc genhtml_legend=1 00:17:31.293 --rc geninfo_all_blocks=1 00:17:31.293 --rc geninfo_unexecuted_blocks=1 00:17:31.293 00:17:31.293 ' 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:31.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.293 --rc genhtml_branch_coverage=1 00:17:31.293 --rc genhtml_function_coverage=1 00:17:31.293 --rc genhtml_legend=1 00:17:31.293 --rc geninfo_all_blocks=1 00:17:31.293 --rc geninfo_unexecuted_blocks=1 00:17:31.293 00:17:31.293 ' 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:31.293 21:07:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.293 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:31.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:31.294 21:07:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:33.824 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.824 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:33.824 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:33.824 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:33.824 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:33.824 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:33.824 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:33.824 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:33.824 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:33.824 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:33.824 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:33.824 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:33.824 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:33.824 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:33.824 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:33.824 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.824 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.824 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.824 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:33.825 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:33.825 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:33.825 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:33.825 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:33.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:33.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:17:33.825 00:17:33.825 --- 10.0.0.2 ping statistics --- 00:17:33.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.825 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:33.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:33.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:17:33.825 00:17:33.825 --- 10.0.0.1 ping statistics --- 00:17:33.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.825 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2974941 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2974941 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2974941 ']' 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.825 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.826 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.826 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.826 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:33.826 [2024-11-19 21:07:07.457766] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
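The block of ip/iptables/ping commands traced just above is nvmf_tcp_init building the NVMe/TCP test topology on the two ice ports discovered earlier: the target-side port cvl_0_0 is moved into its own network namespace (cvl_0_0_ns_spdk) and given 10.0.0.2/24, the initiator-side port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, an iptables rule opens TCP port 4420, and one ping in each direction verifies the link before the target application is started inside the namespace. A condensed sketch of that sequence, using the interface names and addresses exactly as logged (the address flushes, the iptables comment and the waitforlisten polling of /var/tmp/spdk.sock are left out):

  # target NIC lives in a private namespace, initiator NIC stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic on the default port, then check reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # the target itself then runs inside the namespace, as logged:
  # ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
  # ($SPDK_DIR stands in for the spdk checkout path shown in the log)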
00:17:33.826 [2024-11-19 21:07:07.457905] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.826 [2024-11-19 21:07:07.599933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:34.084 [2024-11-19 21:07:07.737304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.084 [2024-11-19 21:07:07.737383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.084 [2024-11-19 21:07:07.737409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.084 [2024-11-19 21:07:07.737433] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.084 [2024-11-19 21:07:07.737461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:34.084 [2024-11-19 21:07:07.740225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.084 [2024-11-19 21:07:07.740287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.084 [2024-11-19 21:07:07.740338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.084 [2024-11-19 21:07:07.740344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:35.016 21:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.016 21:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:35.016 21:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:35.016 21:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:35.016 21:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:35.016 21:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.016 21:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:35.016 21:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11170 00:17:35.016 [2024-11-19 21:07:08.797131] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:35.273 21:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:35.273 { 00:17:35.273 "nqn": "nqn.2016-06.io.spdk:cnode11170", 00:17:35.273 "tgt_name": "foobar", 00:17:35.273 "method": "nvmf_create_subsystem", 00:17:35.273 "req_id": 1 00:17:35.273 } 00:17:35.273 Got JSON-RPC error response 00:17:35.273 response: 00:17:35.273 { 00:17:35.273 "code": -32603, 00:17:35.273 "message": "Unable to find target foobar" 00:17:35.273 }' 00:17:35.273 21:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:35.273 { 00:17:35.273 "nqn": "nqn.2016-06.io.spdk:cnode11170", 00:17:35.273 "tgt_name": "foobar", 00:17:35.273 "method": "nvmf_create_subsystem", 00:17:35.273 "req_id": 1 00:17:35.273 } 00:17:35.273 Got JSON-RPC error response 00:17:35.273 
response: 00:17:35.273 { 00:17:35.273 "code": -32603, 00:17:35.273 "message": "Unable to find target foobar" 00:17:35.273 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:35.273 21:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:35.274 21:07:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25496 00:17:35.532 [2024-11-19 21:07:09.078114] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25496: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:35.532 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:35.532 { 00:17:35.532 "nqn": "nqn.2016-06.io.spdk:cnode25496", 00:17:35.532 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:35.532 "method": "nvmf_create_subsystem", 00:17:35.532 "req_id": 1 00:17:35.532 } 00:17:35.532 Got JSON-RPC error response 00:17:35.532 response: 00:17:35.532 { 00:17:35.532 "code": -32602, 00:17:35.532 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:35.532 }' 00:17:35.532 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:35.532 { 00:17:35.532 "nqn": "nqn.2016-06.io.spdk:cnode25496", 00:17:35.532 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:35.532 "method": "nvmf_create_subsystem", 00:17:35.532 "req_id": 1 00:17:35.532 } 00:17:35.532 Got JSON-RPC error response 00:17:35.532 response: 00:17:35.532 { 00:17:35.532 "code": -32602, 00:17:35.532 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:35.532 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:35.532 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:35.532 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode19272 00:17:35.791 [2024-11-19 21:07:09.347006] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19272: invalid model number 'SPDK_Controller' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:35.791 { 00:17:35.791 "nqn": "nqn.2016-06.io.spdk:cnode19272", 00:17:35.791 "model_number": "SPDK_Controller\u001f", 00:17:35.791 "method": "nvmf_create_subsystem", 00:17:35.791 "req_id": 1 00:17:35.791 } 00:17:35.791 Got JSON-RPC error response 00:17:35.791 response: 00:17:35.791 { 00:17:35.791 "code": -32602, 00:17:35.791 "message": "Invalid MN SPDK_Controller\u001f" 00:17:35.791 }' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:35.791 { 00:17:35.791 "nqn": "nqn.2016-06.io.spdk:cnode19272", 00:17:35.791 "model_number": "SPDK_Controller\u001f", 00:17:35.791 "method": "nvmf_create_subsystem", 00:17:35.791 "req_id": 1 00:17:35.791 } 00:17:35.791 Got JSON-RPC error response 00:17:35.791 response: 00:17:35.791 { 00:17:35.791 "code": -32602, 00:17:35.791 "message": "Invalid MN SPDK_Controller\u001f" 00:17:35.791 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:35.791 21:07:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.791 21:07:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:35.791 
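The long run of printf %x / echo -e / string+= steps around this point is gen_random_s from target/invalid.sh assembling a random string one character at a time; this pass builds 21 characters for a serial-number check and a later pass builds 41 for a model-number check, one byte longer than the 20-byte SN and 40-byte MN fields of the NVMe Identify Controller data, so both RPCs are expected to fail. The [[ ... == \- ]] test at the end of the helper only guards against a string that starts with '-', presumably so rpc.py cannot mistake it for an option. A minimal stand-in for the same idea (not the exact helper, which drives a printf %x / echo -e pair per character exactly as traced here):

  # build a string of $1 random printable characters and print it
  gen_random_s() {
      local length=$1 ll string=""
      local charset='ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789 !#$%&()*+,./:;<=>?@[]^_{|}~'
      for ((ll = 0; ll < length; ll++)); do
          string+=${charset:$((RANDOM % ${#charset})):1}
      done
      echo "$string"
  }

  gen_random_s 21   # serial-number candidate, one over the 20-byte SN field
  gen_random_s 41   # model-number candidate, one over the 40-byte MN field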
21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.791 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 
00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ > == \- ]] 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '>&%^I KLnyr{Jsz%ULiN'\''' 00:17:35.792 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '>&%^I KLnyr{Jsz%ULiN'\''' nqn.2016-06.io.spdk:cnode29512 00:17:36.050 [2024-11-19 21:07:09.708214] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29512: invalid serial number '>&%^I KLnyr{Jsz%ULiN'' 00:17:36.050 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:36.050 { 00:17:36.050 "nqn": "nqn.2016-06.io.spdk:cnode29512", 00:17:36.050 "serial_number": ">&%^I KLnyr{Jsz%ULiN'\''", 00:17:36.050 "method": "nvmf_create_subsystem", 00:17:36.050 "req_id": 1 00:17:36.050 } 00:17:36.050 Got JSON-RPC error response 00:17:36.050 response: 00:17:36.050 { 00:17:36.050 "code": -32602, 00:17:36.050 "message": "Invalid SN >&%^I KLnyr{Jsz%ULiN'\''" 00:17:36.050 }' 00:17:36.050 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:36.050 { 00:17:36.050 "nqn": "nqn.2016-06.io.spdk:cnode29512", 00:17:36.050 "serial_number": ">&%^I KLnyr{Jsz%ULiN'", 00:17:36.050 "method": "nvmf_create_subsystem", 00:17:36.050 "req_id": 1 00:17:36.050 } 00:17:36.050 Got JSON-RPC error response 00:17:36.050 response: 00:17:36.050 { 00:17:36.050 "code": -32602, 00:17:36.050 "message": "Invalid SN >&%^I KLnyr{Jsz%ULiN'" 00:17:36.050 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:36.050 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:36.050 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:36.050 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' 
'72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:36.050 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:36.050 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:36.050 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:36.050 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.050 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:36.050 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:36.050 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 
00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 
00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
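Every negative check in this test has the same shape, whether the bad argument is the fixed 'foobar' target name, the SPDKISFASTANDAWESOME / SPDK_Controller strings with a trailing 0x1f control byte (the echo -e '\x1f' steps earlier), or the random strings being assembled here: call rpc.py nvmf_create_subsystem with one invalid field, capture the JSON-RPC error response, and glob-match its message text. A sketch of that shape, assuming a running nvmf_tgt reachable through the default /var/tmp/spdk.sock and with the exit-status handling simplified (the flags, NQNs and message patterns are the ones logged in this section):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # unknown target name
  out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11170 2>&1) || true
  [[ $out == *"Unable to find target"* ]]

  # serial number with a control character (0x1f appended via ANSI-C quoting)
  out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25496 2>&1) || true
  [[ $out == *"Invalid SN"* ]]

  # model number with the same control character
  out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode19272 2>&1) || true
  [[ $out == *"Invalid MN"* ]]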
00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.051 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:36.052 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 
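For orientation, after the 41-character model number below is assembled and rejected with Invalid MN, the remainder of this section creates the TCP transport and a known-good subsystem and then probes three more error paths: removing a listener whose traddr is empty, and creating subsystems with min_cntlid values outside the allowed 1..65519 range. Sketched with the same assumptions and the same $rpc helper variable as above:

  # transport plus a valid subsystem to operate on
  $rpc nvmf_create_transport --trtype tcp                    # logs "*** TCP Transport Init ***"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a

  # empty traddr: the call fails with "Invalid parameters"; the script only insists
  # that the failure is not "Unable to stop listener."
  out=$($rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 2>&1) || true
  [[ $out != *"Unable to stop listener."* ]]

  # min_cntlid below the minimum (1) and above the maximum (65519)
  out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18485 -i 0 2>&1) || true
  [[ $out == *"Invalid cntlid range"* ]]
  out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19302 -i 65520 2>&1) || true
  [[ $out == *"Invalid cntlid range"* ]]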
00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x5f' 00:17:36.310 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:36.311 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.311 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.311 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:36.311 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:36.311 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:36.311 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:36.311 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:36.311 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ D == \- ]] 00:17:36.311 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Dqq% L:KRT[B@GF(U*7)03*+>hJ!SYSu1|E8$,B_W' 00:17:36.311 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Dqq% L:KRT[B@GF(U*7)03*+>hJ!SYSu1|E8$,B_W' nqn.2016-06.io.spdk:cnode21870 00:17:36.568 [2024-11-19 21:07:10.141737] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21870: invalid model number 'Dqq% L:KRT[B@GF(U*7)03*+>hJ!SYSu1|E8$,B_W' 00:17:36.569 21:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:36.569 { 00:17:36.569 "nqn": "nqn.2016-06.io.spdk:cnode21870", 00:17:36.569 "model_number": "Dqq% L:KRT[B@GF(U*7)03*+>hJ!SYSu1|E8$,B_W", 00:17:36.569 "method": "nvmf_create_subsystem", 00:17:36.569 "req_id": 1 00:17:36.569 } 00:17:36.569 Got JSON-RPC error response 00:17:36.569 response: 00:17:36.569 { 00:17:36.569 "code": -32602, 00:17:36.569 "message": "Invalid MN Dqq% L:KRT[B@GF(U*7)03*+>hJ!SYSu1|E8$,B_W" 00:17:36.569 }' 00:17:36.569 21:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:36.569 { 00:17:36.569 "nqn": "nqn.2016-06.io.spdk:cnode21870", 00:17:36.569 "model_number": "Dqq% L:KRT[B@GF(U*7)03*+>hJ!SYSu1|E8$,B_W", 00:17:36.569 "method": "nvmf_create_subsystem", 00:17:36.569 "req_id": 1 00:17:36.569 } 00:17:36.569 Got JSON-RPC error response 00:17:36.569 response: 00:17:36.569 { 00:17:36.569 "code": -32602, 00:17:36.569 "message": "Invalid MN Dqq% L:KRT[B@GF(U*7)03*+>hJ!SYSu1|E8$,B_W" 00:17:36.569 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:36.569 21:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:36.826 [2024-11-19 21:07:10.418742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.826 21:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:37.084 21:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:37.084 21:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:37.084 21:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:37.084 21:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@67 -- # IP= 00:17:37.084 21:07:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:37.342 [2024-11-19 21:07:10.979541] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:37.342 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:37.342 { 00:17:37.342 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:37.342 "listen_address": { 00:17:37.342 "trtype": "tcp", 00:17:37.342 "traddr": "", 00:17:37.342 "trsvcid": "4421" 00:17:37.342 }, 00:17:37.342 "method": "nvmf_subsystem_remove_listener", 00:17:37.342 "req_id": 1 00:17:37.342 } 00:17:37.342 Got JSON-RPC error response 00:17:37.342 response: 00:17:37.342 { 00:17:37.342 "code": -32602, 00:17:37.342 "message": "Invalid parameters" 00:17:37.342 }' 00:17:37.342 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:37.342 { 00:17:37.342 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:37.342 "listen_address": { 00:17:37.342 "trtype": "tcp", 00:17:37.342 "traddr": "", 00:17:37.342 "trsvcid": "4421" 00:17:37.342 }, 00:17:37.342 "method": "nvmf_subsystem_remove_listener", 00:17:37.342 "req_id": 1 00:17:37.342 } 00:17:37.342 Got JSON-RPC error response 00:17:37.342 response: 00:17:37.342 { 00:17:37.342 "code": -32602, 00:17:37.342 "message": "Invalid parameters" 00:17:37.342 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:37.342 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18485 -i 0 00:17:37.599 [2024-11-19 21:07:11.244422] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18485: invalid cntlid range [0-65519] 00:17:37.599 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:37.599 { 00:17:37.599 "nqn": "nqn.2016-06.io.spdk:cnode18485", 00:17:37.599 "min_cntlid": 0, 00:17:37.599 "method": "nvmf_create_subsystem", 00:17:37.599 "req_id": 1 00:17:37.599 } 00:17:37.599 Got JSON-RPC error response 00:17:37.599 response: 00:17:37.599 { 00:17:37.599 "code": -32602, 00:17:37.599 "message": "Invalid cntlid range [0-65519]" 00:17:37.599 }' 00:17:37.599 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:37.599 { 00:17:37.599 "nqn": "nqn.2016-06.io.spdk:cnode18485", 00:17:37.599 "min_cntlid": 0, 00:17:37.599 "method": "nvmf_create_subsystem", 00:17:37.599 "req_id": 1 00:17:37.600 } 00:17:37.600 Got JSON-RPC error response 00:17:37.600 response: 00:17:37.600 { 00:17:37.600 "code": -32602, 00:17:37.600 "message": "Invalid cntlid range [0-65519]" 00:17:37.600 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:37.600 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19302 -i 65520 00:17:37.857 [2024-11-19 21:07:11.517245] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19302: invalid cntlid range [65520-65519] 00:17:37.857 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:37.857 { 00:17:37.857 "nqn": "nqn.2016-06.io.spdk:cnode19302", 00:17:37.857 "min_cntlid": 
65520, 00:17:37.857 "method": "nvmf_create_subsystem", 00:17:37.857 "req_id": 1 00:17:37.857 } 00:17:37.857 Got JSON-RPC error response 00:17:37.857 response: 00:17:37.857 { 00:17:37.857 "code": -32602, 00:17:37.857 "message": "Invalid cntlid range [65520-65519]" 00:17:37.857 }' 00:17:37.857 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:37.857 { 00:17:37.857 "nqn": "nqn.2016-06.io.spdk:cnode19302", 00:17:37.857 "min_cntlid": 65520, 00:17:37.857 "method": "nvmf_create_subsystem", 00:17:37.857 "req_id": 1 00:17:37.857 } 00:17:37.857 Got JSON-RPC error response 00:17:37.857 response: 00:17:37.857 { 00:17:37.857 "code": -32602, 00:17:37.857 "message": "Invalid cntlid range [65520-65519]" 00:17:37.857 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:37.857 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24264 -I 0 00:17:38.115 [2024-11-19 21:07:11.782203] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24264: invalid cntlid range [1-0] 00:17:38.115 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:38.115 { 00:17:38.115 "nqn": "nqn.2016-06.io.spdk:cnode24264", 00:17:38.115 "max_cntlid": 0, 00:17:38.115 "method": "nvmf_create_subsystem", 00:17:38.115 "req_id": 1 00:17:38.115 } 00:17:38.115 Got JSON-RPC error response 00:17:38.115 response: 00:17:38.115 { 00:17:38.115 "code": -32602, 00:17:38.115 "message": "Invalid cntlid range [1-0]" 00:17:38.115 }' 00:17:38.115 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:38.115 { 00:17:38.115 "nqn": "nqn.2016-06.io.spdk:cnode24264", 00:17:38.115 "max_cntlid": 0, 00:17:38.115 "method": "nvmf_create_subsystem", 00:17:38.115 "req_id": 1 00:17:38.115 } 00:17:38.115 Got JSON-RPC error response 00:17:38.115 response: 00:17:38.115 { 00:17:38.115 "code": -32602, 00:17:38.115 "message": "Invalid cntlid range [1-0]" 00:17:38.115 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:38.115 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24117 -I 65520 00:17:38.373 [2024-11-19 21:07:12.067213] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24117: invalid cntlid range [1-65520] 00:17:38.374 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:38.374 { 00:17:38.374 "nqn": "nqn.2016-06.io.spdk:cnode24117", 00:17:38.374 "max_cntlid": 65520, 00:17:38.374 "method": "nvmf_create_subsystem", 00:17:38.374 "req_id": 1 00:17:38.374 } 00:17:38.374 Got JSON-RPC error response 00:17:38.374 response: 00:17:38.374 { 00:17:38.374 "code": -32602, 00:17:38.374 "message": "Invalid cntlid range [1-65520]" 00:17:38.374 }' 00:17:38.374 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:38.374 { 00:17:38.374 "nqn": "nqn.2016-06.io.spdk:cnode24117", 00:17:38.374 "max_cntlid": 65520, 00:17:38.374 "method": "nvmf_create_subsystem", 00:17:38.374 "req_id": 1 00:17:38.374 } 00:17:38.374 Got JSON-RPC error response 00:17:38.374 response: 00:17:38.374 { 00:17:38.374 "code": -32602, 00:17:38.374 "message": "Invalid cntlid range [1-65520]" 00:17:38.374 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 
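The cntlid-range checks in this part of the trace (continuing just below with the [6-5] case) all follow the same pattern: call nvmf_create_subsystem through scripts/rpc.py with an out-of-range -i (min_cntlid) or -I (max_cntlid) value, capture the JSON-RPC error text, and glob-match it against "Invalid cntlid range". A minimal stand-alone sketch of that pattern is shown below; the relative rpc.py path and the assumption that a target is already serving the default RPC socket are illustrative, not taken from this log.

```bash
#!/usr/bin/env bash
# Sketch of the negative cntlid-range check exercised by invalid.sh above.
# Assumptions: an SPDK nvmf target is already up on the default RPC socket,
# and rpc.py is reachable at scripts/rpc.py (illustrative path).
rpc=scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode18485   # NQN reused from the trace

# -i sets min_cntlid; 0 is below the valid range, so the target is expected
# to answer with a -32602 "Invalid cntlid range [0-65519]" JSON-RPC error.
out=$("$rpc" nvmf_create_subsystem "$nqn" -i 0 2>&1) || true

if [[ "$out" == *"Invalid cntlid range"* ]]; then
    echo "rejected as expected"
else
    echo "unexpected response: $out" >&2
    exit 1
fi
```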
00:17:38.374 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13951 -i 6 -I 5 00:17:38.632 [2024-11-19 21:07:12.332079] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13951: invalid cntlid range [6-5] 00:17:38.632 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:38.632 { 00:17:38.632 "nqn": "nqn.2016-06.io.spdk:cnode13951", 00:17:38.632 "min_cntlid": 6, 00:17:38.632 "max_cntlid": 5, 00:17:38.632 "method": "nvmf_create_subsystem", 00:17:38.632 "req_id": 1 00:17:38.632 } 00:17:38.632 Got JSON-RPC error response 00:17:38.632 response: 00:17:38.632 { 00:17:38.632 "code": -32602, 00:17:38.632 "message": "Invalid cntlid range [6-5]" 00:17:38.632 }' 00:17:38.632 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:38.632 { 00:17:38.632 "nqn": "nqn.2016-06.io.spdk:cnode13951", 00:17:38.632 "min_cntlid": 6, 00:17:38.632 "max_cntlid": 5, 00:17:38.632 "method": "nvmf_create_subsystem", 00:17:38.632 "req_id": 1 00:17:38.632 } 00:17:38.632 Got JSON-RPC error response 00:17:38.632 response: 00:17:38.632 { 00:17:38.632 "code": -32602, 00:17:38.632 "message": "Invalid cntlid range [6-5]" 00:17:38.632 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:38.632 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:38.889 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:38.889 { 00:17:38.889 "name": "foobar", 00:17:38.889 "method": "nvmf_delete_target", 00:17:38.889 "req_id": 1 00:17:38.889 } 00:17:38.889 Got JSON-RPC error response 00:17:38.889 response: 00:17:38.889 { 00:17:38.889 "code": -32602, 00:17:38.889 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:38.889 }' 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:38.890 { 00:17:38.890 "name": "foobar", 00:17:38.890 "method": "nvmf_delete_target", 00:17:38.890 "req_id": 1 00:17:38.890 } 00:17:38.890 Got JSON-RPC error response 00:17:38.890 response: 00:17:38.890 { 00:17:38.890 "code": -32602, 00:17:38.890 "message": "The specified target doesn't exist, cannot delete it." 
00:17:38.890 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:38.890 rmmod nvme_tcp 00:17:38.890 rmmod nvme_fabrics 00:17:38.890 rmmod nvme_keyring 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2974941 ']' 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2974941 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2974941 ']' 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2974941 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2974941 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2974941' 00:17:38.890 killing process with pid 2974941 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2974941 00:17:38.890 21:07:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2974941 00:17:40.264 21:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:40.264 21:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:40.264 21:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:40.264 21:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:40.264 21:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:40.264 21:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:40.264 21:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore 00:17:40.264 21:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:40.264 21:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:40.264 21:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.264 21:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.264 21:07:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.170 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:42.170 00:17:42.170 real 0m10.904s 00:17:42.170 user 0m27.343s 00:17:42.170 sys 0m2.763s 00:17:42.170 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:42.170 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:42.170 ************************************ 00:17:42.170 END TEST nvmf_invalid 00:17:42.170 ************************************ 00:17:42.170 21:07:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:42.170 21:07:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:42.170 21:07:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:42.170 21:07:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:42.170 ************************************ 00:17:42.170 START TEST nvmf_connect_stress 00:17:42.170 ************************************ 00:17:42.170 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:42.171 * Looking for test storage... 
00:17:42.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:42.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.171 --rc genhtml_branch_coverage=1 00:17:42.171 --rc genhtml_function_coverage=1 00:17:42.171 --rc genhtml_legend=1 00:17:42.171 --rc geninfo_all_blocks=1 00:17:42.171 --rc geninfo_unexecuted_blocks=1 00:17:42.171 00:17:42.171 ' 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:42.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.171 --rc genhtml_branch_coverage=1 00:17:42.171 --rc genhtml_function_coverage=1 00:17:42.171 --rc genhtml_legend=1 00:17:42.171 --rc geninfo_all_blocks=1 00:17:42.171 --rc geninfo_unexecuted_blocks=1 00:17:42.171 00:17:42.171 ' 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:42.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.171 --rc genhtml_branch_coverage=1 00:17:42.171 --rc genhtml_function_coverage=1 00:17:42.171 --rc genhtml_legend=1 00:17:42.171 --rc geninfo_all_blocks=1 00:17:42.171 --rc geninfo_unexecuted_blocks=1 00:17:42.171 00:17:42.171 ' 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:42.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.171 --rc genhtml_branch_coverage=1 00:17:42.171 --rc genhtml_function_coverage=1 00:17:42.171 --rc genhtml_legend=1 00:17:42.171 --rc geninfo_all_blocks=1 00:17:42.171 --rc geninfo_unexecuted_blocks=1 00:17:42.171 00:17:42.171 ' 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.171 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:42.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:42.172 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.705 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:44.705 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:44.705 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:44.706 21:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:44.706 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:44.706 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:44.706 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:44.706 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:44.706 21:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:44.706 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:44.706 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:44.706 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:44.706 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:44.706 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:44.706 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:44.706 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:44.706 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:44.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:44.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:17:44.706 00:17:44.706 --- 10.0.0.2 ping statistics --- 00:17:44.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.706 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:17:44.706 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:44.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:44.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:17:44.706 00:17:44.706 --- 10.0.0.1 ping statistics --- 00:17:44.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.707 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2977831 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2977831 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2977831 ']' 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:44.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.707 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.707 [2024-11-19 21:07:18.190439] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:17:44.707 [2024-11-19 21:07:18.190583] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.707 [2024-11-19 21:07:18.342430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:44.707 [2024-11-19 21:07:18.485361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.707 [2024-11-19 21:07:18.485442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.707 [2024-11-19 21:07:18.485467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.707 [2024-11-19 21:07:18.485491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.707 [2024-11-19 21:07:18.485511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.707 [2024-11-19 21:07:18.488124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.707 [2024-11-19 21:07:18.488157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.707 [2024-11-19 21:07:18.488152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.649 [2024-11-19 21:07:19.164857] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
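For reference, the target-side setup that the rpc_cmd traces just above and below perform (TCP transport, subsystem cnode1, a TCP listener on 10.0.0.2:4420, and a null bdev) can be reproduced with direct rpc.py calls. The sketch below is a simplification: it assumes the nvmf_tgt started earlier in this run is still up and invokes scripts/rpc.py from the SPDK repository root rather than through the test harness wrappers.

```bash
#!/usr/bin/env bash
# Sketch of the connect_stress target setup driven by rpc_cmd in the trace:
# the same RPC methods and arguments, issued directly through rpc.py.
# Assumptions: the nvmf_tgt launched earlier in this run is still running,
# and scripts/rpc.py is used from the SPDK repository root (illustrative path).
set -euo pipefail

rpc=scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_transport -t tcp -o -u 8192                        # transport options as used by this test run
$rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10    # allow any host, serial number, max 10 namespaces
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420  # TCP listener on the target-side IP/port from this run
$rpc bdev_null_create NULL1 1000 512                                # null bdev NULL1 (size/block-size arguments as in the trace)
```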
00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.649 [2024-11-19 21:07:19.185210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.649 NULL1 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2977985 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:45.649 21:07:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.649 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.909 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.909 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:45.909 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.909 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.909 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.167 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.167 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:46.167 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.167 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.167 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.793 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.793 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:46.793 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.793 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.793 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.793 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.793 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:46.793 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.793 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.793 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.359 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.359 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:47.359 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.359 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.359 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.617 21:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.617 21:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:47.617 21:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.617 21:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.617 21:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.875 21:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.875 21:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:47.875 21:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.876 21:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.876 21:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.134 21:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.134 21:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:48.134 21:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.134 21:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.134 21:07:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.392 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.392 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:48.392 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.392 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.392 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.958 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.958 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:48.958 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.958 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.958 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.217 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.217 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:49.217 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.217 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.217 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.475 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.475 21:07:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:49.475 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.475 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.475 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.733 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.733 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:49.733 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.733 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.733 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.300 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.300 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:50.300 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.300 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.300 21:07:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.558 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.558 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:50.558 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.558 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.558 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.816 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.816 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:50.816 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.816 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.816 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.074 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.074 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:51.074 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.074 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.074 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.332 21:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.332 21:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:51.332 21:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.332 21:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.332 21:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.898 21:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.898 21:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:51.898 21:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.898 21:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.898 21:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.156 21:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.156 21:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:52.156 21:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.156 21:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.156 21:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.415 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.415 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:52.415 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.415 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.415 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.673 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.673 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:52.673 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.673 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.673 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.932 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.932 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:52.932 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.932 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.932 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.497 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.497 21:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:53.497 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.497 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.497 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.756 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.756 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:53.756 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.756 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.756 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.043 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.043 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:54.043 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.043 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.043 21:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.301 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.301 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:54.301 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.301 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.301 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.559 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.559 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:54.559 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.559 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.559 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.126 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.126 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:55.126 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.126 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.126 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.384 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.384 21:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:55.384 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.384 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.384 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.641 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.641 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:55.641 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.641 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.641 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.641 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:55.899 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.899 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2977985 00:17:55.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2977985) - No such process 00:17:55.899 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2977985 00:17:55.899 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:55.899 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:55.899 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:55.899 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:55.899 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:55.899 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:55.899 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:55.899 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:55.899 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:55.899 rmmod nvme_tcp 00:17:55.899 rmmod nvme_fabrics 00:17:55.899 rmmod nvme_keyring 00:17:56.158 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:56.158 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:56.158 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:56.158 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2977831 ']' 00:17:56.158 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2977831 00:17:56.158 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2977831 ']' 00:17:56.158 21:07:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2977831 00:17:56.158 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:56.158 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.158 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2977831 00:17:56.158 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:56.158 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:56.158 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2977831' 00:17:56.158 killing process with pid 2977831 00:17:56.158 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2977831 00:17:56.158 21:07:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2977831 00:17:57.092 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:57.092 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:57.092 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:57.092 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:57.092 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:57.092 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:57.092 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:57.092 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:57.092 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:57.092 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.092 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.092 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.631 21:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:59.631 00:17:59.631 real 0m17.176s 00:17:59.631 user 0m42.708s 00:17:59.631 sys 0m6.141s 00:17:59.631 21:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.631 21:07:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.631 ************************************ 00:17:59.631 END TEST nvmf_connect_stress 00:17:59.631 ************************************ 00:17:59.631 21:07:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:59.631 21:07:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:59.631 
21:07:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.631 21:07:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:59.631 ************************************ 00:17:59.631 START TEST nvmf_fused_ordering 00:17:59.631 ************************************ 00:17:59.631 21:07:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:59.631 * Looking for test storage... 00:17:59.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:59.631 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:59.631 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:17:59.631 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:59.631 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:59.631 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:59.631 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:59.631 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:59.631 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:59.631 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:59.631 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:59.631 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:59.631 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:59.631 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:59.631 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:59.631 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:59.631 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:59.631 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:59.631 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:59.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.632 --rc genhtml_branch_coverage=1 00:17:59.632 --rc genhtml_function_coverage=1 00:17:59.632 --rc genhtml_legend=1 00:17:59.632 --rc geninfo_all_blocks=1 00:17:59.632 --rc geninfo_unexecuted_blocks=1 00:17:59.632 00:17:59.632 ' 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:59.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.632 --rc genhtml_branch_coverage=1 00:17:59.632 --rc genhtml_function_coverage=1 00:17:59.632 --rc genhtml_legend=1 00:17:59.632 --rc geninfo_all_blocks=1 00:17:59.632 --rc geninfo_unexecuted_blocks=1 00:17:59.632 00:17:59.632 ' 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:59.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.632 --rc genhtml_branch_coverage=1 00:17:59.632 --rc genhtml_function_coverage=1 00:17:59.632 --rc genhtml_legend=1 00:17:59.632 --rc geninfo_all_blocks=1 00:17:59.632 --rc geninfo_unexecuted_blocks=1 00:17:59.632 00:17:59.632 ' 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:59.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.632 --rc genhtml_branch_coverage=1 00:17:59.632 --rc genhtml_function_coverage=1 00:17:59.632 --rc genhtml_legend=1 00:17:59.632 --rc geninfo_all_blocks=1 00:17:59.632 --rc geninfo_unexecuted_blocks=1 00:17:59.632 00:17:59.632 ' 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:59.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:59.632 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:01.538 21:07:35 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:01.538 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:01.538 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:01.539 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:01.539 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:01.539 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:01.539 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:01.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:01.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:18:01.798 00:18:01.798 --- 10.0.0.2 ping statistics --- 00:18:01.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.798 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:01.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:01.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:18:01.798 00:18:01.798 --- 10.0.0.1 ping statistics --- 00:18:01.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.798 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2981264 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2981264 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2981264 ']' 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:01.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.798 21:07:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:01.798 [2024-11-19 21:07:35.470102] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:18:01.798 [2024-11-19 21:07:35.470277] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.057 [2024-11-19 21:07:35.677395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.057 [2024-11-19 21:07:35.826314] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.057 [2024-11-19 21:07:35.826393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.057 [2024-11-19 21:07:35.826418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.057 [2024-11-19 21:07:35.826442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.057 [2024-11-19 21:07:35.826461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:02.057 [2024-11-19 21:07:35.828055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.994 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.994 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:18:02.994 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:02.994 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:02.994 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.994 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.994 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:02.994 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.994 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.994 [2024-11-19 21:07:36.549515] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.994 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.994 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:02.994 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.995 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.995 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:02.995 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.995 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.995 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.995 [2024-11-19 21:07:36.565754] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.995 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.995 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:02.995 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.995 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.995 NULL1 00:18:02.995 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.995 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:02.995 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.995 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.995 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.995 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:02.995 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.995 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.995 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.995 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:02.995 [2024-11-19 21:07:36.632845] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:18:02.995 [2024-11-19 21:07:36.632932] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2981421 ] 00:18:03.562 Attached to nqn.2016-06.io.spdk:cnode1 00:18:03.562 Namespace ID: 1 size: 1GB 00:18:03.562 fused_ordering(0) 00:18:03.562 fused_ordering(1) 00:18:03.562 fused_ordering(2) 00:18:03.562 fused_ordering(3) 00:18:03.562 fused_ordering(4) 00:18:03.562 fused_ordering(5) 00:18:03.562 fused_ordering(6) 00:18:03.562 fused_ordering(7) 00:18:03.562 fused_ordering(8) 00:18:03.562 fused_ordering(9) 00:18:03.562 fused_ordering(10) 00:18:03.562 fused_ordering(11) 00:18:03.562 fused_ordering(12) 00:18:03.562 fused_ordering(13) 00:18:03.562 fused_ordering(14) 00:18:03.562 fused_ordering(15) 00:18:03.562 fused_ordering(16) 00:18:03.562 fused_ordering(17) 00:18:03.562 fused_ordering(18) 00:18:03.562 fused_ordering(19) 00:18:03.562 fused_ordering(20) 00:18:03.562 fused_ordering(21) 00:18:03.562 fused_ordering(22) 00:18:03.562 fused_ordering(23) 00:18:03.562 fused_ordering(24) 00:18:03.562 fused_ordering(25) 00:18:03.562 fused_ordering(26) 00:18:03.562 fused_ordering(27) 00:18:03.562 fused_ordering(28) 00:18:03.562 fused_ordering(29) 00:18:03.562 fused_ordering(30) 00:18:03.562 fused_ordering(31) 00:18:03.562 fused_ordering(32) 00:18:03.562 fused_ordering(33) 00:18:03.562 fused_ordering(34) 00:18:03.562 fused_ordering(35) 00:18:03.562 fused_ordering(36) 00:18:03.562 fused_ordering(37) 00:18:03.562 fused_ordering(38) 00:18:03.562 fused_ordering(39) 00:18:03.562 fused_ordering(40) 00:18:03.562 fused_ordering(41) 00:18:03.562 fused_ordering(42) 00:18:03.562 fused_ordering(43) 00:18:03.562 fused_ordering(44) 00:18:03.562 fused_ordering(45) 00:18:03.562 fused_ordering(46) 00:18:03.562 fused_ordering(47) 00:18:03.562 fused_ordering(48) 00:18:03.562 fused_ordering(49) 00:18:03.562 fused_ordering(50) 00:18:03.562 fused_ordering(51) 00:18:03.562 fused_ordering(52) 00:18:03.562 fused_ordering(53) 00:18:03.562 fused_ordering(54) 00:18:03.562 fused_ordering(55) 00:18:03.562 fused_ordering(56) 00:18:03.562 fused_ordering(57) 00:18:03.562 fused_ordering(58) 00:18:03.562 fused_ordering(59) 00:18:03.562 fused_ordering(60) 00:18:03.562 fused_ordering(61) 00:18:03.562 fused_ordering(62) 00:18:03.562 fused_ordering(63) 00:18:03.562 fused_ordering(64) 00:18:03.562 fused_ordering(65) 00:18:03.562 fused_ordering(66) 00:18:03.562 fused_ordering(67) 00:18:03.562 fused_ordering(68) 00:18:03.562 fused_ordering(69) 00:18:03.562 fused_ordering(70) 00:18:03.562 fused_ordering(71) 00:18:03.562 fused_ordering(72) 00:18:03.562 fused_ordering(73) 00:18:03.562 fused_ordering(74) 00:18:03.562 fused_ordering(75) 00:18:03.562 fused_ordering(76) 00:18:03.562 fused_ordering(77) 00:18:03.562 fused_ordering(78) 00:18:03.562 fused_ordering(79) 00:18:03.562 fused_ordering(80) 00:18:03.562 fused_ordering(81) 00:18:03.562 fused_ordering(82) 00:18:03.562 fused_ordering(83) 00:18:03.562 fused_ordering(84) 00:18:03.562 fused_ordering(85) 00:18:03.562 fused_ordering(86) 00:18:03.562 fused_ordering(87) 00:18:03.562 fused_ordering(88) 00:18:03.562 fused_ordering(89) 00:18:03.562 fused_ordering(90) 00:18:03.562 fused_ordering(91) 00:18:03.562 fused_ordering(92) 00:18:03.562 fused_ordering(93) 00:18:03.562 fused_ordering(94) 00:18:03.562 fused_ordering(95) 00:18:03.562 fused_ordering(96) 00:18:03.562 fused_ordering(97) 00:18:03.562 fused_ordering(98) 
00:18:03.562 fused_ordering(99) 00:18:03.562 fused_ordering(100) 00:18:03.562 fused_ordering(101) 00:18:03.562 fused_ordering(102) 00:18:03.562 fused_ordering(103) 00:18:03.562 fused_ordering(104) 00:18:03.562 fused_ordering(105) 00:18:03.562 fused_ordering(106) 00:18:03.562 fused_ordering(107) 00:18:03.562 fused_ordering(108) 00:18:03.562 fused_ordering(109) 00:18:03.562 fused_ordering(110) 00:18:03.562 fused_ordering(111) 00:18:03.562 fused_ordering(112) 00:18:03.562 fused_ordering(113) 00:18:03.562 fused_ordering(114) 00:18:03.562 fused_ordering(115) 00:18:03.562 fused_ordering(116) 00:18:03.562 fused_ordering(117) 00:18:03.562 fused_ordering(118) 00:18:03.562 fused_ordering(119) 00:18:03.562 fused_ordering(120) 00:18:03.562 fused_ordering(121) 00:18:03.562 fused_ordering(122) 00:18:03.562 fused_ordering(123) 00:18:03.562 fused_ordering(124) 00:18:03.562 fused_ordering(125) 00:18:03.562 fused_ordering(126) 00:18:03.562 fused_ordering(127) 00:18:03.562 fused_ordering(128) 00:18:03.562 fused_ordering(129) 00:18:03.562 fused_ordering(130) 00:18:03.562 fused_ordering(131) 00:18:03.562 fused_ordering(132) 00:18:03.562 fused_ordering(133) 00:18:03.562 fused_ordering(134) 00:18:03.562 fused_ordering(135) 00:18:03.562 fused_ordering(136) 00:18:03.562 fused_ordering(137) 00:18:03.562 fused_ordering(138) 00:18:03.562 fused_ordering(139) 00:18:03.562 fused_ordering(140) 00:18:03.562 fused_ordering(141) 00:18:03.562 fused_ordering(142) 00:18:03.562 fused_ordering(143) 00:18:03.562 fused_ordering(144) 00:18:03.562 fused_ordering(145) 00:18:03.562 fused_ordering(146) 00:18:03.562 fused_ordering(147) 00:18:03.562 fused_ordering(148) 00:18:03.562 fused_ordering(149) 00:18:03.562 fused_ordering(150) 00:18:03.562 fused_ordering(151) 00:18:03.562 fused_ordering(152) 00:18:03.562 fused_ordering(153) 00:18:03.562 fused_ordering(154) 00:18:03.562 fused_ordering(155) 00:18:03.562 fused_ordering(156) 00:18:03.562 fused_ordering(157) 00:18:03.562 fused_ordering(158) 00:18:03.562 fused_ordering(159) 00:18:03.562 fused_ordering(160) 00:18:03.562 fused_ordering(161) 00:18:03.562 fused_ordering(162) 00:18:03.562 fused_ordering(163) 00:18:03.562 fused_ordering(164) 00:18:03.562 fused_ordering(165) 00:18:03.562 fused_ordering(166) 00:18:03.562 fused_ordering(167) 00:18:03.562 fused_ordering(168) 00:18:03.562 fused_ordering(169) 00:18:03.562 fused_ordering(170) 00:18:03.562 fused_ordering(171) 00:18:03.562 fused_ordering(172) 00:18:03.562 fused_ordering(173) 00:18:03.562 fused_ordering(174) 00:18:03.562 fused_ordering(175) 00:18:03.562 fused_ordering(176) 00:18:03.562 fused_ordering(177) 00:18:03.562 fused_ordering(178) 00:18:03.562 fused_ordering(179) 00:18:03.562 fused_ordering(180) 00:18:03.562 fused_ordering(181) 00:18:03.562 fused_ordering(182) 00:18:03.562 fused_ordering(183) 00:18:03.562 fused_ordering(184) 00:18:03.562 fused_ordering(185) 00:18:03.562 fused_ordering(186) 00:18:03.562 fused_ordering(187) 00:18:03.562 fused_ordering(188) 00:18:03.562 fused_ordering(189) 00:18:03.562 fused_ordering(190) 00:18:03.562 fused_ordering(191) 00:18:03.562 fused_ordering(192) 00:18:03.562 fused_ordering(193) 00:18:03.562 fused_ordering(194) 00:18:03.562 fused_ordering(195) 00:18:03.562 fused_ordering(196) 00:18:03.563 fused_ordering(197) 00:18:03.563 fused_ordering(198) 00:18:03.563 fused_ordering(199) 00:18:03.563 fused_ordering(200) 00:18:03.563 fused_ordering(201) 00:18:03.563 fused_ordering(202) 00:18:03.563 fused_ordering(203) 00:18:03.563 fused_ordering(204) 00:18:03.563 fused_ordering(205) 00:18:04.130 
fused_ordering(206) 00:18:04.130 fused_ordering(207) 00:18:04.130 fused_ordering(208) 00:18:04.130 fused_ordering(209) 00:18:04.130 fused_ordering(210) 00:18:04.130 fused_ordering(211) 00:18:04.130 fused_ordering(212) 00:18:04.130 fused_ordering(213) 00:18:04.130 fused_ordering(214) 00:18:04.130 fused_ordering(215) 00:18:04.130 fused_ordering(216) 00:18:04.130 fused_ordering(217) 00:18:04.130 fused_ordering(218) 00:18:04.130 fused_ordering(219) 00:18:04.130 fused_ordering(220) 00:18:04.130 fused_ordering(221) 00:18:04.130 fused_ordering(222) 00:18:04.130 fused_ordering(223) 00:18:04.130 fused_ordering(224) 00:18:04.130 fused_ordering(225) 00:18:04.130 fused_ordering(226) 00:18:04.130 fused_ordering(227) 00:18:04.130 fused_ordering(228) 00:18:04.130 fused_ordering(229) 00:18:04.130 fused_ordering(230) 00:18:04.130 fused_ordering(231) 00:18:04.130 fused_ordering(232) 00:18:04.130 fused_ordering(233) 00:18:04.130 fused_ordering(234) 00:18:04.130 fused_ordering(235) 00:18:04.130 fused_ordering(236) 00:18:04.130 fused_ordering(237) 00:18:04.130 fused_ordering(238) 00:18:04.130 fused_ordering(239) 00:18:04.130 fused_ordering(240) 00:18:04.130 fused_ordering(241) 00:18:04.130 fused_ordering(242) 00:18:04.130 fused_ordering(243) 00:18:04.130 fused_ordering(244) 00:18:04.130 fused_ordering(245) 00:18:04.130 fused_ordering(246) 00:18:04.130 fused_ordering(247) 00:18:04.130 fused_ordering(248) 00:18:04.130 fused_ordering(249) 00:18:04.130 fused_ordering(250) 00:18:04.130 fused_ordering(251) 00:18:04.130 fused_ordering(252) 00:18:04.130 fused_ordering(253) 00:18:04.130 fused_ordering(254) 00:18:04.130 fused_ordering(255) 00:18:04.130 fused_ordering(256) 00:18:04.130 fused_ordering(257) 00:18:04.130 fused_ordering(258) 00:18:04.130 fused_ordering(259) 00:18:04.130 fused_ordering(260) 00:18:04.130 fused_ordering(261) 00:18:04.130 fused_ordering(262) 00:18:04.130 fused_ordering(263) 00:18:04.130 fused_ordering(264) 00:18:04.130 fused_ordering(265) 00:18:04.130 fused_ordering(266) 00:18:04.130 fused_ordering(267) 00:18:04.130 fused_ordering(268) 00:18:04.130 fused_ordering(269) 00:18:04.130 fused_ordering(270) 00:18:04.130 fused_ordering(271) 00:18:04.130 fused_ordering(272) 00:18:04.130 fused_ordering(273) 00:18:04.130 fused_ordering(274) 00:18:04.130 fused_ordering(275) 00:18:04.130 fused_ordering(276) 00:18:04.130 fused_ordering(277) 00:18:04.130 fused_ordering(278) 00:18:04.130 fused_ordering(279) 00:18:04.130 fused_ordering(280) 00:18:04.130 fused_ordering(281) 00:18:04.130 fused_ordering(282) 00:18:04.130 fused_ordering(283) 00:18:04.130 fused_ordering(284) 00:18:04.130 fused_ordering(285) 00:18:04.130 fused_ordering(286) 00:18:04.130 fused_ordering(287) 00:18:04.130 fused_ordering(288) 00:18:04.130 fused_ordering(289) 00:18:04.130 fused_ordering(290) 00:18:04.130 fused_ordering(291) 00:18:04.130 fused_ordering(292) 00:18:04.130 fused_ordering(293) 00:18:04.130 fused_ordering(294) 00:18:04.130 fused_ordering(295) 00:18:04.130 fused_ordering(296) 00:18:04.130 fused_ordering(297) 00:18:04.130 fused_ordering(298) 00:18:04.130 fused_ordering(299) 00:18:04.130 fused_ordering(300) 00:18:04.130 fused_ordering(301) 00:18:04.130 fused_ordering(302) 00:18:04.130 fused_ordering(303) 00:18:04.130 fused_ordering(304) 00:18:04.130 fused_ordering(305) 00:18:04.130 fused_ordering(306) 00:18:04.130 fused_ordering(307) 00:18:04.130 fused_ordering(308) 00:18:04.130 fused_ordering(309) 00:18:04.130 fused_ordering(310) 00:18:04.130 fused_ordering(311) 00:18:04.130 fused_ordering(312) 00:18:04.130 fused_ordering(313) 
00:18:04.130 fused_ordering(314) 00:18:04.130 fused_ordering(315) 00:18:04.130 fused_ordering(316) 00:18:04.130 fused_ordering(317) 00:18:04.130 fused_ordering(318) 00:18:04.130 fused_ordering(319) 00:18:04.130 fused_ordering(320) 00:18:04.130 fused_ordering(321) 00:18:04.130 fused_ordering(322) 00:18:04.130 fused_ordering(323) 00:18:04.130 fused_ordering(324) 00:18:04.130 fused_ordering(325) 00:18:04.130 fused_ordering(326) 00:18:04.130 fused_ordering(327) 00:18:04.130 fused_ordering(328) 00:18:04.130 fused_ordering(329) 00:18:04.130 fused_ordering(330) 00:18:04.130 fused_ordering(331) 00:18:04.130 fused_ordering(332) 00:18:04.130 fused_ordering(333) 00:18:04.130 fused_ordering(334) 00:18:04.130 fused_ordering(335) 00:18:04.130 fused_ordering(336) 00:18:04.130 fused_ordering(337) 00:18:04.130 fused_ordering(338) 00:18:04.130 fused_ordering(339) 00:18:04.130 fused_ordering(340) 00:18:04.130 fused_ordering(341) 00:18:04.130 fused_ordering(342) 00:18:04.130 fused_ordering(343) 00:18:04.130 fused_ordering(344) 00:18:04.130 fused_ordering(345) 00:18:04.130 fused_ordering(346) 00:18:04.130 fused_ordering(347) 00:18:04.130 fused_ordering(348) 00:18:04.130 fused_ordering(349) 00:18:04.130 fused_ordering(350) 00:18:04.130 fused_ordering(351) 00:18:04.130 fused_ordering(352) 00:18:04.130 fused_ordering(353) 00:18:04.130 fused_ordering(354) 00:18:04.130 fused_ordering(355) 00:18:04.130 fused_ordering(356) 00:18:04.130 fused_ordering(357) 00:18:04.130 fused_ordering(358) 00:18:04.130 fused_ordering(359) 00:18:04.130 fused_ordering(360) 00:18:04.130 fused_ordering(361) 00:18:04.130 fused_ordering(362) 00:18:04.130 fused_ordering(363) 00:18:04.130 fused_ordering(364) 00:18:04.130 fused_ordering(365) 00:18:04.130 fused_ordering(366) 00:18:04.130 fused_ordering(367) 00:18:04.130 fused_ordering(368) 00:18:04.130 fused_ordering(369) 00:18:04.130 fused_ordering(370) 00:18:04.130 fused_ordering(371) 00:18:04.130 fused_ordering(372) 00:18:04.130 fused_ordering(373) 00:18:04.130 fused_ordering(374) 00:18:04.130 fused_ordering(375) 00:18:04.130 fused_ordering(376) 00:18:04.130 fused_ordering(377) 00:18:04.130 fused_ordering(378) 00:18:04.130 fused_ordering(379) 00:18:04.130 fused_ordering(380) 00:18:04.130 fused_ordering(381) 00:18:04.130 fused_ordering(382) 00:18:04.130 fused_ordering(383) 00:18:04.130 fused_ordering(384) 00:18:04.130 fused_ordering(385) 00:18:04.130 fused_ordering(386) 00:18:04.130 fused_ordering(387) 00:18:04.130 fused_ordering(388) 00:18:04.130 fused_ordering(389) 00:18:04.130 fused_ordering(390) 00:18:04.130 fused_ordering(391) 00:18:04.130 fused_ordering(392) 00:18:04.130 fused_ordering(393) 00:18:04.130 fused_ordering(394) 00:18:04.130 fused_ordering(395) 00:18:04.130 fused_ordering(396) 00:18:04.130 fused_ordering(397) 00:18:04.130 fused_ordering(398) 00:18:04.130 fused_ordering(399) 00:18:04.130 fused_ordering(400) 00:18:04.130 fused_ordering(401) 00:18:04.130 fused_ordering(402) 00:18:04.130 fused_ordering(403) 00:18:04.130 fused_ordering(404) 00:18:04.130 fused_ordering(405) 00:18:04.130 fused_ordering(406) 00:18:04.130 fused_ordering(407) 00:18:04.130 fused_ordering(408) 00:18:04.130 fused_ordering(409) 00:18:04.130 fused_ordering(410) 00:18:04.698 fused_ordering(411) 00:18:04.698 fused_ordering(412) 00:18:04.698 fused_ordering(413) 00:18:04.698 fused_ordering(414) 00:18:04.698 fused_ordering(415) 00:18:04.698 fused_ordering(416) 00:18:04.698 fused_ordering(417) 00:18:04.698 fused_ordering(418) 00:18:04.698 fused_ordering(419) 00:18:04.698 fused_ordering(420) 00:18:04.698 
fused_ordering(421) 00:18:04.698 fused_ordering(422) 00:18:04.698 fused_ordering(423) 00:18:04.698 fused_ordering(424) 00:18:04.698 fused_ordering(425) 00:18:04.698 fused_ordering(426) 00:18:04.698 fused_ordering(427) 00:18:04.698 fused_ordering(428) 00:18:04.698 fused_ordering(429) 00:18:04.698 fused_ordering(430) 00:18:04.698 fused_ordering(431) 00:18:04.698 fused_ordering(432) 00:18:04.698 fused_ordering(433) 00:18:04.698 fused_ordering(434) 00:18:04.698 fused_ordering(435) 00:18:04.698 fused_ordering(436) 00:18:04.698 fused_ordering(437) 00:18:04.698 fused_ordering(438) 00:18:04.698 fused_ordering(439) 00:18:04.698 fused_ordering(440) 00:18:04.698 fused_ordering(441) 00:18:04.698 fused_ordering(442) 00:18:04.698 fused_ordering(443) 00:18:04.698 fused_ordering(444) 00:18:04.698 fused_ordering(445) 00:18:04.698 fused_ordering(446) 00:18:04.698 fused_ordering(447) 00:18:04.698 fused_ordering(448) 00:18:04.698 fused_ordering(449) 00:18:04.698 fused_ordering(450) 00:18:04.698 fused_ordering(451) 00:18:04.698 fused_ordering(452) 00:18:04.698 fused_ordering(453) 00:18:04.698 fused_ordering(454) 00:18:04.698 fused_ordering(455) 00:18:04.698 fused_ordering(456) 00:18:04.698 fused_ordering(457) 00:18:04.698 fused_ordering(458) 00:18:04.698 fused_ordering(459) 00:18:04.698 fused_ordering(460) 00:18:04.698 fused_ordering(461) 00:18:04.698 fused_ordering(462) 00:18:04.698 fused_ordering(463) 00:18:04.698 fused_ordering(464) 00:18:04.698 fused_ordering(465) 00:18:04.698 fused_ordering(466) 00:18:04.698 fused_ordering(467) 00:18:04.698 fused_ordering(468) 00:18:04.698 fused_ordering(469) 00:18:04.698 fused_ordering(470) 00:18:04.698 fused_ordering(471) 00:18:04.698 fused_ordering(472) 00:18:04.698 fused_ordering(473) 00:18:04.698 fused_ordering(474) 00:18:04.698 fused_ordering(475) 00:18:04.698 fused_ordering(476) 00:18:04.698 fused_ordering(477) 00:18:04.698 fused_ordering(478) 00:18:04.698 fused_ordering(479) 00:18:04.698 fused_ordering(480) 00:18:04.698 fused_ordering(481) 00:18:04.698 fused_ordering(482) 00:18:04.698 fused_ordering(483) 00:18:04.698 fused_ordering(484) 00:18:04.698 fused_ordering(485) 00:18:04.698 fused_ordering(486) 00:18:04.698 fused_ordering(487) 00:18:04.698 fused_ordering(488) 00:18:04.698 fused_ordering(489) 00:18:04.698 fused_ordering(490) 00:18:04.698 fused_ordering(491) 00:18:04.698 fused_ordering(492) 00:18:04.698 fused_ordering(493) 00:18:04.698 fused_ordering(494) 00:18:04.699 fused_ordering(495) 00:18:04.699 fused_ordering(496) 00:18:04.699 fused_ordering(497) 00:18:04.699 fused_ordering(498) 00:18:04.699 fused_ordering(499) 00:18:04.699 fused_ordering(500) 00:18:04.699 fused_ordering(501) 00:18:04.699 fused_ordering(502) 00:18:04.699 fused_ordering(503) 00:18:04.699 fused_ordering(504) 00:18:04.699 fused_ordering(505) 00:18:04.699 fused_ordering(506) 00:18:04.699 fused_ordering(507) 00:18:04.699 fused_ordering(508) 00:18:04.699 fused_ordering(509) 00:18:04.699 fused_ordering(510) 00:18:04.699 fused_ordering(511) 00:18:04.699 fused_ordering(512) 00:18:04.699 fused_ordering(513) 00:18:04.699 fused_ordering(514) 00:18:04.699 fused_ordering(515) 00:18:04.699 fused_ordering(516) 00:18:04.699 fused_ordering(517) 00:18:04.699 fused_ordering(518) 00:18:04.699 fused_ordering(519) 00:18:04.699 fused_ordering(520) 00:18:04.699 fused_ordering(521) 00:18:04.699 fused_ordering(522) 00:18:04.699 fused_ordering(523) 00:18:04.699 fused_ordering(524) 00:18:04.699 fused_ordering(525) 00:18:04.699 fused_ordering(526) 00:18:04.699 fused_ordering(527) 00:18:04.699 fused_ordering(528) 
00:18:04.699 fused_ordering(529) 00:18:04.699 fused_ordering(530) 00:18:04.699 fused_ordering(531) 00:18:04.699 fused_ordering(532) 00:18:04.699 fused_ordering(533) 00:18:04.699 fused_ordering(534) 00:18:04.699 fused_ordering(535) 00:18:04.699 fused_ordering(536) 00:18:04.699 fused_ordering(537) 00:18:04.699 fused_ordering(538) 00:18:04.699 fused_ordering(539) 00:18:04.699 fused_ordering(540) 00:18:04.699 fused_ordering(541) 00:18:04.699 fused_ordering(542) 00:18:04.699 fused_ordering(543) 00:18:04.699 fused_ordering(544) 00:18:04.699 fused_ordering(545) 00:18:04.699 fused_ordering(546) 00:18:04.699 fused_ordering(547) 00:18:04.699 fused_ordering(548) 00:18:04.699 fused_ordering(549) 00:18:04.699 fused_ordering(550) 00:18:04.699 fused_ordering(551) 00:18:04.699 fused_ordering(552) 00:18:04.699 fused_ordering(553) 00:18:04.699 fused_ordering(554) 00:18:04.699 fused_ordering(555) 00:18:04.699 fused_ordering(556) 00:18:04.699 fused_ordering(557) 00:18:04.699 fused_ordering(558) 00:18:04.699 fused_ordering(559) 00:18:04.699 fused_ordering(560) 00:18:04.699 fused_ordering(561) 00:18:04.699 fused_ordering(562) 00:18:04.699 fused_ordering(563) 00:18:04.699 fused_ordering(564) 00:18:04.699 fused_ordering(565) 00:18:04.699 fused_ordering(566) 00:18:04.699 fused_ordering(567) 00:18:04.699 fused_ordering(568) 00:18:04.699 fused_ordering(569) 00:18:04.699 fused_ordering(570) 00:18:04.699 fused_ordering(571) 00:18:04.699 fused_ordering(572) 00:18:04.699 fused_ordering(573) 00:18:04.699 fused_ordering(574) 00:18:04.699 fused_ordering(575) 00:18:04.699 fused_ordering(576) 00:18:04.699 fused_ordering(577) 00:18:04.699 fused_ordering(578) 00:18:04.699 fused_ordering(579) 00:18:04.699 fused_ordering(580) 00:18:04.699 fused_ordering(581) 00:18:04.699 fused_ordering(582) 00:18:04.699 fused_ordering(583) 00:18:04.699 fused_ordering(584) 00:18:04.699 fused_ordering(585) 00:18:04.699 fused_ordering(586) 00:18:04.699 fused_ordering(587) 00:18:04.699 fused_ordering(588) 00:18:04.699 fused_ordering(589) 00:18:04.699 fused_ordering(590) 00:18:04.699 fused_ordering(591) 00:18:04.699 fused_ordering(592) 00:18:04.699 fused_ordering(593) 00:18:04.699 fused_ordering(594) 00:18:04.699 fused_ordering(595) 00:18:04.699 fused_ordering(596) 00:18:04.699 fused_ordering(597) 00:18:04.699 fused_ordering(598) 00:18:04.699 fused_ordering(599) 00:18:04.699 fused_ordering(600) 00:18:04.699 fused_ordering(601) 00:18:04.699 fused_ordering(602) 00:18:04.699 fused_ordering(603) 00:18:04.699 fused_ordering(604) 00:18:04.699 fused_ordering(605) 00:18:04.699 fused_ordering(606) 00:18:04.699 fused_ordering(607) 00:18:04.699 fused_ordering(608) 00:18:04.699 fused_ordering(609) 00:18:04.699 fused_ordering(610) 00:18:04.699 fused_ordering(611) 00:18:04.699 fused_ordering(612) 00:18:04.699 fused_ordering(613) 00:18:04.699 fused_ordering(614) 00:18:04.699 fused_ordering(615) 00:18:05.265 fused_ordering(616) 00:18:05.265 fused_ordering(617) 00:18:05.265 fused_ordering(618) 00:18:05.265 fused_ordering(619) 00:18:05.265 fused_ordering(620) 00:18:05.265 fused_ordering(621) 00:18:05.265 fused_ordering(622) 00:18:05.265 fused_ordering(623) 00:18:05.265 fused_ordering(624) 00:18:05.265 fused_ordering(625) 00:18:05.265 fused_ordering(626) 00:18:05.265 fused_ordering(627) 00:18:05.265 fused_ordering(628) 00:18:05.265 fused_ordering(629) 00:18:05.265 fused_ordering(630) 00:18:05.265 fused_ordering(631) 00:18:05.265 fused_ordering(632) 00:18:05.265 fused_ordering(633) 00:18:05.265 fused_ordering(634) 00:18:05.265 fused_ordering(635) 00:18:05.265 
fused_ordering(636) 00:18:05.265 fused_ordering(637) 00:18:05.265 fused_ordering(638) 00:18:05.265 fused_ordering(639) 00:18:05.265 fused_ordering(640) 00:18:05.265 fused_ordering(641) 00:18:05.265 fused_ordering(642) 00:18:05.265 fused_ordering(643) 00:18:05.265 fused_ordering(644) 00:18:05.265 fused_ordering(645) 00:18:05.265 fused_ordering(646) 00:18:05.265 fused_ordering(647) 00:18:05.265 fused_ordering(648) 00:18:05.265 fused_ordering(649) 00:18:05.265 fused_ordering(650) 00:18:05.265 fused_ordering(651) 00:18:05.265 fused_ordering(652) 00:18:05.265 fused_ordering(653) 00:18:05.265 fused_ordering(654) 00:18:05.265 fused_ordering(655) 00:18:05.265 fused_ordering(656) 00:18:05.265 fused_ordering(657) 00:18:05.265 fused_ordering(658) 00:18:05.265 fused_ordering(659) 00:18:05.265 fused_ordering(660) 00:18:05.265 fused_ordering(661) 00:18:05.265 fused_ordering(662) 00:18:05.265 fused_ordering(663) 00:18:05.265 fused_ordering(664) 00:18:05.265 fused_ordering(665) 00:18:05.265 fused_ordering(666) 00:18:05.265 fused_ordering(667) 00:18:05.265 fused_ordering(668) 00:18:05.265 fused_ordering(669) 00:18:05.266 fused_ordering(670) 00:18:05.266 fused_ordering(671) 00:18:05.266 fused_ordering(672) 00:18:05.266 fused_ordering(673) 00:18:05.266 fused_ordering(674) 00:18:05.266 fused_ordering(675) 00:18:05.266 fused_ordering(676) 00:18:05.266 fused_ordering(677) 00:18:05.266 fused_ordering(678) 00:18:05.266 fused_ordering(679) 00:18:05.266 fused_ordering(680) 00:18:05.266 fused_ordering(681) 00:18:05.266 fused_ordering(682) 00:18:05.266 fused_ordering(683) 00:18:05.266 fused_ordering(684) 00:18:05.266 fused_ordering(685) 00:18:05.266 fused_ordering(686) 00:18:05.266 fused_ordering(687) 00:18:05.266 fused_ordering(688) 00:18:05.266 fused_ordering(689) 00:18:05.266 fused_ordering(690) 00:18:05.266 fused_ordering(691) 00:18:05.266 fused_ordering(692) 00:18:05.266 fused_ordering(693) 00:18:05.266 fused_ordering(694) 00:18:05.266 fused_ordering(695) 00:18:05.266 fused_ordering(696) 00:18:05.266 fused_ordering(697) 00:18:05.266 fused_ordering(698) 00:18:05.266 fused_ordering(699) 00:18:05.266 fused_ordering(700) 00:18:05.266 fused_ordering(701) 00:18:05.266 fused_ordering(702) 00:18:05.266 fused_ordering(703) 00:18:05.266 fused_ordering(704) 00:18:05.266 fused_ordering(705) 00:18:05.266 fused_ordering(706) 00:18:05.266 fused_ordering(707) 00:18:05.266 fused_ordering(708) 00:18:05.266 fused_ordering(709) 00:18:05.266 fused_ordering(710) 00:18:05.266 fused_ordering(711) 00:18:05.266 fused_ordering(712) 00:18:05.266 fused_ordering(713) 00:18:05.266 fused_ordering(714) 00:18:05.266 fused_ordering(715) 00:18:05.266 fused_ordering(716) 00:18:05.266 fused_ordering(717) 00:18:05.266 fused_ordering(718) 00:18:05.266 fused_ordering(719) 00:18:05.266 fused_ordering(720) 00:18:05.266 fused_ordering(721) 00:18:05.266 fused_ordering(722) 00:18:05.266 fused_ordering(723) 00:18:05.266 fused_ordering(724) 00:18:05.266 fused_ordering(725) 00:18:05.266 fused_ordering(726) 00:18:05.266 fused_ordering(727) 00:18:05.266 fused_ordering(728) 00:18:05.266 fused_ordering(729) 00:18:05.266 fused_ordering(730) 00:18:05.266 fused_ordering(731) 00:18:05.266 fused_ordering(732) 00:18:05.266 fused_ordering(733) 00:18:05.266 fused_ordering(734) 00:18:05.266 fused_ordering(735) 00:18:05.266 fused_ordering(736) 00:18:05.266 fused_ordering(737) 00:18:05.266 fused_ordering(738) 00:18:05.266 fused_ordering(739) 00:18:05.266 fused_ordering(740) 00:18:05.266 fused_ordering(741) 00:18:05.266 fused_ordering(742) 00:18:05.266 fused_ordering(743) 
00:18:05.266 fused_ordering(744) 00:18:05.266 fused_ordering(745) 00:18:05.266 fused_ordering(746) 00:18:05.266 fused_ordering(747) 00:18:05.266 fused_ordering(748) 00:18:05.266 fused_ordering(749) 00:18:05.266 fused_ordering(750) 00:18:05.266 fused_ordering(751) 00:18:05.266 fused_ordering(752) 00:18:05.266 fused_ordering(753) 00:18:05.266 fused_ordering(754) 00:18:05.266 fused_ordering(755) 00:18:05.266 fused_ordering(756) 00:18:05.266 fused_ordering(757) 00:18:05.266 fused_ordering(758) 00:18:05.266 fused_ordering(759) 00:18:05.266 fused_ordering(760) 00:18:05.266 fused_ordering(761) 00:18:05.266 fused_ordering(762) 00:18:05.266 fused_ordering(763) 00:18:05.266 fused_ordering(764) 00:18:05.266 fused_ordering(765) 00:18:05.266 fused_ordering(766) 00:18:05.266 fused_ordering(767) 00:18:05.266 fused_ordering(768) 00:18:05.266 fused_ordering(769) 00:18:05.266 fused_ordering(770) 00:18:05.266 fused_ordering(771) 00:18:05.266 fused_ordering(772) 00:18:05.266 fused_ordering(773) 00:18:05.266 fused_ordering(774) 00:18:05.266 fused_ordering(775) 00:18:05.266 fused_ordering(776) 00:18:05.266 fused_ordering(777) 00:18:05.266 fused_ordering(778) 00:18:05.266 fused_ordering(779) 00:18:05.266 fused_ordering(780) 00:18:05.266 fused_ordering(781) 00:18:05.266 fused_ordering(782) 00:18:05.266 fused_ordering(783) 00:18:05.266 fused_ordering(784) 00:18:05.266 fused_ordering(785) 00:18:05.266 fused_ordering(786) 00:18:05.266 fused_ordering(787) 00:18:05.266 fused_ordering(788) 00:18:05.266 fused_ordering(789) 00:18:05.266 fused_ordering(790) 00:18:05.266 fused_ordering(791) 00:18:05.266 fused_ordering(792) 00:18:05.266 fused_ordering(793) 00:18:05.266 fused_ordering(794) 00:18:05.266 fused_ordering(795) 00:18:05.266 fused_ordering(796) 00:18:05.266 fused_ordering(797) 00:18:05.266 fused_ordering(798) 00:18:05.266 fused_ordering(799) 00:18:05.266 fused_ordering(800) 00:18:05.266 fused_ordering(801) 00:18:05.266 fused_ordering(802) 00:18:05.266 fused_ordering(803) 00:18:05.266 fused_ordering(804) 00:18:05.266 fused_ordering(805) 00:18:05.266 fused_ordering(806) 00:18:05.266 fused_ordering(807) 00:18:05.266 fused_ordering(808) 00:18:05.266 fused_ordering(809) 00:18:05.266 fused_ordering(810) 00:18:05.266 fused_ordering(811) 00:18:05.266 fused_ordering(812) 00:18:05.266 fused_ordering(813) 00:18:05.266 fused_ordering(814) 00:18:05.266 fused_ordering(815) 00:18:05.266 fused_ordering(816) 00:18:05.266 fused_ordering(817) 00:18:05.266 fused_ordering(818) 00:18:05.266 fused_ordering(819) 00:18:05.266 fused_ordering(820) 00:18:06.211 fused_ordering(821) 00:18:06.211 fused_ordering(822) 00:18:06.211 fused_ordering(823) 00:18:06.211 fused_ordering(824) 00:18:06.211 fused_ordering(825) 00:18:06.211 fused_ordering(826) 00:18:06.211 fused_ordering(827) 00:18:06.211 fused_ordering(828) 00:18:06.211 fused_ordering(829) 00:18:06.211 fused_ordering(830) 00:18:06.211 fused_ordering(831) 00:18:06.211 fused_ordering(832) 00:18:06.211 fused_ordering(833) 00:18:06.211 fused_ordering(834) 00:18:06.211 fused_ordering(835) 00:18:06.211 fused_ordering(836) 00:18:06.211 fused_ordering(837) 00:18:06.211 fused_ordering(838) 00:18:06.211 fused_ordering(839) 00:18:06.211 fused_ordering(840) 00:18:06.211 fused_ordering(841) 00:18:06.211 fused_ordering(842) 00:18:06.211 fused_ordering(843) 00:18:06.211 fused_ordering(844) 00:18:06.211 fused_ordering(845) 00:18:06.211 fused_ordering(846) 00:18:06.211 fused_ordering(847) 00:18:06.211 fused_ordering(848) 00:18:06.211 fused_ordering(849) 00:18:06.211 fused_ordering(850) 00:18:06.211 
fused_ordering(851) 00:18:06.211 fused_ordering(852) 00:18:06.211 fused_ordering(853) 00:18:06.211 fused_ordering(854) 00:18:06.211 fused_ordering(855) 00:18:06.211 fused_ordering(856) 00:18:06.211 fused_ordering(857) 00:18:06.211 fused_ordering(858) 00:18:06.211 fused_ordering(859) 00:18:06.211 fused_ordering(860) 00:18:06.211 fused_ordering(861) 00:18:06.211 fused_ordering(862) 00:18:06.211 fused_ordering(863) 00:18:06.211 fused_ordering(864) 00:18:06.211 fused_ordering(865) 00:18:06.211 fused_ordering(866) 00:18:06.211 fused_ordering(867) 00:18:06.211 fused_ordering(868) 00:18:06.211 fused_ordering(869) 00:18:06.211 fused_ordering(870) 00:18:06.211 fused_ordering(871) 00:18:06.211 fused_ordering(872) 00:18:06.211 fused_ordering(873) 00:18:06.211 fused_ordering(874) 00:18:06.211 fused_ordering(875) 00:18:06.211 fused_ordering(876) 00:18:06.211 fused_ordering(877) 00:18:06.211 fused_ordering(878) 00:18:06.211 fused_ordering(879) 00:18:06.211 fused_ordering(880) 00:18:06.211 fused_ordering(881) 00:18:06.211 fused_ordering(882) 00:18:06.211 fused_ordering(883) 00:18:06.211 fused_ordering(884) 00:18:06.211 fused_ordering(885) 00:18:06.211 fused_ordering(886) 00:18:06.211 fused_ordering(887) 00:18:06.211 fused_ordering(888) 00:18:06.211 fused_ordering(889) 00:18:06.211 fused_ordering(890) 00:18:06.211 fused_ordering(891) 00:18:06.211 fused_ordering(892) 00:18:06.211 fused_ordering(893) 00:18:06.211 fused_ordering(894) 00:18:06.211 fused_ordering(895) 00:18:06.211 fused_ordering(896) 00:18:06.211 fused_ordering(897) 00:18:06.211 fused_ordering(898) 00:18:06.211 fused_ordering(899) 00:18:06.211 fused_ordering(900) 00:18:06.211 fused_ordering(901) 00:18:06.211 fused_ordering(902) 00:18:06.211 fused_ordering(903) 00:18:06.211 fused_ordering(904) 00:18:06.211 fused_ordering(905) 00:18:06.211 fused_ordering(906) 00:18:06.211 fused_ordering(907) 00:18:06.211 fused_ordering(908) 00:18:06.211 fused_ordering(909) 00:18:06.211 fused_ordering(910) 00:18:06.211 fused_ordering(911) 00:18:06.211 fused_ordering(912) 00:18:06.211 fused_ordering(913) 00:18:06.211 fused_ordering(914) 00:18:06.211 fused_ordering(915) 00:18:06.211 fused_ordering(916) 00:18:06.211 fused_ordering(917) 00:18:06.211 fused_ordering(918) 00:18:06.211 fused_ordering(919) 00:18:06.211 fused_ordering(920) 00:18:06.211 fused_ordering(921) 00:18:06.211 fused_ordering(922) 00:18:06.211 fused_ordering(923) 00:18:06.211 fused_ordering(924) 00:18:06.211 fused_ordering(925) 00:18:06.211 fused_ordering(926) 00:18:06.211 fused_ordering(927) 00:18:06.211 fused_ordering(928) 00:18:06.211 fused_ordering(929) 00:18:06.211 fused_ordering(930) 00:18:06.211 fused_ordering(931) 00:18:06.211 fused_ordering(932) 00:18:06.211 fused_ordering(933) 00:18:06.211 fused_ordering(934) 00:18:06.211 fused_ordering(935) 00:18:06.211 fused_ordering(936) 00:18:06.211 fused_ordering(937) 00:18:06.211 fused_ordering(938) 00:18:06.211 fused_ordering(939) 00:18:06.211 fused_ordering(940) 00:18:06.211 fused_ordering(941) 00:18:06.211 fused_ordering(942) 00:18:06.211 fused_ordering(943) 00:18:06.211 fused_ordering(944) 00:18:06.211 fused_ordering(945) 00:18:06.211 fused_ordering(946) 00:18:06.211 fused_ordering(947) 00:18:06.211 fused_ordering(948) 00:18:06.211 fused_ordering(949) 00:18:06.211 fused_ordering(950) 00:18:06.211 fused_ordering(951) 00:18:06.211 fused_ordering(952) 00:18:06.211 fused_ordering(953) 00:18:06.211 fused_ordering(954) 00:18:06.211 fused_ordering(955) 00:18:06.211 fused_ordering(956) 00:18:06.211 fused_ordering(957) 00:18:06.211 fused_ordering(958) 
00:18:06.211 fused_ordering(959) 00:18:06.211 fused_ordering(960) 00:18:06.211 fused_ordering(961) 00:18:06.211 fused_ordering(962) 00:18:06.211 fused_ordering(963) 00:18:06.211 fused_ordering(964) 00:18:06.211 fused_ordering(965) 00:18:06.211 fused_ordering(966) 00:18:06.211 fused_ordering(967) 00:18:06.211 fused_ordering(968) 00:18:06.211 fused_ordering(969) 00:18:06.211 fused_ordering(970) 00:18:06.211 fused_ordering(971) 00:18:06.211 fused_ordering(972) 00:18:06.211 fused_ordering(973) 00:18:06.211 fused_ordering(974) 00:18:06.211 fused_ordering(975) 00:18:06.211 fused_ordering(976) 00:18:06.211 fused_ordering(977) 00:18:06.211 fused_ordering(978) 00:18:06.211 fused_ordering(979) 00:18:06.211 fused_ordering(980) 00:18:06.211 fused_ordering(981) 00:18:06.211 fused_ordering(982) 00:18:06.211 fused_ordering(983) 00:18:06.211 fused_ordering(984) 00:18:06.211 fused_ordering(985) 00:18:06.211 fused_ordering(986) 00:18:06.211 fused_ordering(987) 00:18:06.211 fused_ordering(988) 00:18:06.211 fused_ordering(989) 00:18:06.211 fused_ordering(990) 00:18:06.211 fused_ordering(991) 00:18:06.211 fused_ordering(992) 00:18:06.211 fused_ordering(993) 00:18:06.211 fused_ordering(994) 00:18:06.211 fused_ordering(995) 00:18:06.211 fused_ordering(996) 00:18:06.211 fused_ordering(997) 00:18:06.211 fused_ordering(998) 00:18:06.211 fused_ordering(999) 00:18:06.211 fused_ordering(1000) 00:18:06.211 fused_ordering(1001) 00:18:06.211 fused_ordering(1002) 00:18:06.211 fused_ordering(1003) 00:18:06.211 fused_ordering(1004) 00:18:06.211 fused_ordering(1005) 00:18:06.211 fused_ordering(1006) 00:18:06.211 fused_ordering(1007) 00:18:06.211 fused_ordering(1008) 00:18:06.212 fused_ordering(1009) 00:18:06.212 fused_ordering(1010) 00:18:06.212 fused_ordering(1011) 00:18:06.212 fused_ordering(1012) 00:18:06.212 fused_ordering(1013) 00:18:06.212 fused_ordering(1014) 00:18:06.212 fused_ordering(1015) 00:18:06.212 fused_ordering(1016) 00:18:06.212 fused_ordering(1017) 00:18:06.212 fused_ordering(1018) 00:18:06.212 fused_ordering(1019) 00:18:06.212 fused_ordering(1020) 00:18:06.212 fused_ordering(1021) 00:18:06.212 fused_ordering(1022) 00:18:06.212 fused_ordering(1023) 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:06.212 rmmod nvme_tcp 00:18:06.212 rmmod nvme_fabrics 00:18:06.212 rmmod nvme_keyring 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:06.212 21:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2981264 ']' 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2981264 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2981264 ']' 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2981264 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2981264 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2981264' 00:18:06.212 killing process with pid 2981264 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2981264 00:18:06.212 21:07:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2981264 00:18:07.589 21:07:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:07.589 21:07:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:07.589 21:07:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:07.589 21:07:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:07.589 21:07:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:07.589 21:07:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:07.589 21:07:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:07.589 21:07:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:07.589 21:07:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:07.589 21:07:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.589 21:07:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:07.589 21:07:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:09.497 00:18:09.497 real 0m10.109s 00:18:09.497 user 0m8.361s 00:18:09.497 sys 0m3.558s 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:09.497 ************************************ 00:18:09.497 END TEST nvmf_fused_ordering 00:18:09.497 
************************************ 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:09.497 ************************************ 00:18:09.497 START TEST nvmf_ns_masking 00:18:09.497 ************************************ 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:09.497 * Looking for test storage... 00:18:09.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:09.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.497 --rc genhtml_branch_coverage=1 00:18:09.497 --rc genhtml_function_coverage=1 00:18:09.497 --rc genhtml_legend=1 00:18:09.497 --rc geninfo_all_blocks=1 00:18:09.497 --rc geninfo_unexecuted_blocks=1 00:18:09.497 00:18:09.497 ' 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:09.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.497 --rc genhtml_branch_coverage=1 00:18:09.497 --rc genhtml_function_coverage=1 00:18:09.497 --rc genhtml_legend=1 00:18:09.497 --rc geninfo_all_blocks=1 00:18:09.497 --rc geninfo_unexecuted_blocks=1 00:18:09.497 00:18:09.497 ' 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:09.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.497 --rc genhtml_branch_coverage=1 00:18:09.497 --rc genhtml_function_coverage=1 00:18:09.497 --rc genhtml_legend=1 00:18:09.497 --rc geninfo_all_blocks=1 00:18:09.497 --rc geninfo_unexecuted_blocks=1 00:18:09.497 00:18:09.497 ' 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:09.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.497 --rc genhtml_branch_coverage=1 00:18:09.497 --rc genhtml_function_coverage=1 00:18:09.497 --rc genhtml_legend=1 00:18:09.497 --rc geninfo_all_blocks=1 00:18:09.497 --rc geninfo_unexecuted_blocks=1 00:18:09.497 00:18:09.497 ' 00:18:09.497 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:09.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:09.498 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=fea36513-fcd4-4da0-8ac5-9e9c17fb52b0 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=67e97b5c-55a1-4d92-9d02-6159c536f5b6 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=659c5348-6e3c-459c-ac10-64f4238bbe18 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:09.757 21:07:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:11.662 21:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:11.662 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:11.662 21:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:11.662 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:11.662 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
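The device-discovery loop traced above reduces to two steps: pick out the PCI functions whose vendor/device IDs match the supported NICs (Intel E810, 0x1592/0x159b in this run), then map each function to its kernel net device through the /sys/bus/pci/devices/<pci>/net/ glob. A minimal standalone sketch of that logic, using direct sysfs reads as a stand-in for the pci_bus_cache helper that this log does not show:

#!/usr/bin/env bash
# Sketch of the NIC discovery seen in the trace above (simplified).
# The vendor/device IDs and the cvl_* names are the ones from this run;
# the sysfs filtering replaces SPDK's pci_bus_cache lookup (an assumption).
pci_devs=() net_devs=()

for dev in /sys/bus/pci/devices/*; do
    [[ $(<"$dev/vendor") == 0x8086 ]] || continue
    did=$(<"$dev/device")
    [[ $did == 0x1592 || $did == 0x159b ]] || continue   # E810 functions
    pci_devs+=("${dev##*/}")                             # e.g. 0000:0a:00.0
done

for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # same glob as the trace
    [[ -e ${pci_net_devs[0]} ]] || continue
    net_devs+=("${pci_net_devs[@]##*/}")                 # cvl_0_0, cvl_0_1 on this host
done

printf 'Found net devices: %s\n' "${net_devs[*]}"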
00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:11.662 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:11.662 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:11.663 21:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:11.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:11.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:18:11.663 00:18:11.663 --- 10.0.0.2 ping statistics --- 00:18:11.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.663 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:11.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:11.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:18:11.663 00:18:11.663 --- 10.0.0.1 ping statistics --- 00:18:11.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.663 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:11.663 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:11.921 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:11.921 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:11.921 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:11.921 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:11.921 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2983904 00:18:11.921 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:11.921 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2983904 00:18:11.921 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2983904 ']' 00:18:11.921 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.921 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.922 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.922 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.922 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:11.922 [2024-11-19 21:07:45.563992] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:18:11.922 [2024-11-19 21:07:45.564160] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.180 [2024-11-19 21:07:45.719249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.180 [2024-11-19 21:07:45.859576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.180 [2024-11-19 21:07:45.859676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:12.180 [2024-11-19 21:07:45.859702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.180 [2024-11-19 21:07:45.859727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.180 [2024-11-19 21:07:45.859747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
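The nvmfappstart and configuration steps around this point amount to the sequence below: launch nvmf_tgt inside the target namespace, wait for its RPC socket, then create the TCP transport, the malloc bdevs, the subsystem and the listener. A condensed sketch reusing the paths, NQNs and addresses from this run; the rpc_get_methods poll stands in for the waitforlisten helper, whose exact wait logic is not shown in this log (an assumption):

#!/usr/bin/env bash
# Condensed sketch of the target bring-up performed by this test run.
set -e
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

# Start the NVMe-oF target in the namespace created earlier.
ip netns exec cvl_0_0_ns_spdk "$tgt" -i 0 -e 0xFFFF &
nvmfpid=$!

# Wait until the app answers on its default RPC socket (/var/tmp/spdk.sock).
until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc1
"$rpc" bdev_malloc_create 64 512 -b Malloc2
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420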
00:18:12.180 [2024-11-19 21:07:45.861447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.747 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.747 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:12.747 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:12.747 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:12.747 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:12.747 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.747 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:13.005 [2024-11-19 21:07:46.781103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.263 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:13.263 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:13.263 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:13.521 Malloc1 00:18:13.521 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:13.779 Malloc2 00:18:13.779 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:14.038 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:14.296 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:14.554 [2024-11-19 21:07:48.334838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.812 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:14.812 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 659c5348-6e3c-459c-ac10-64f4238bbe18 -a 10.0.0.2 -s 4420 -i 4 00:18:14.812 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:14.812 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:14.812 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:14.812 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:14.812 
21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:17.341 21:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:17.341 21:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:17.341 21:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:17.341 21:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:17.341 21:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:17.341 21:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:17.341 21:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:17.341 21:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:17.341 21:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:17.341 21:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:17.341 21:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:17.341 21:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.341 21:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:17.341 [ 0]:0x1 00:18:17.341 21:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:17.341 21:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.341 21:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=051de49e613348c0bcb40c2ea4767a0b 00:18:17.341 21:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 051de49e613348c0bcb40c2ea4767a0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.341 21:07:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:17.341 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:17.341 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.341 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:17.341 [ 0]:0x1 00:18:17.341 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:17.341 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.341 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=051de49e613348c0bcb40c2ea4767a0b 00:18:17.341 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 051de49e613348c0bcb40c2ea4767a0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.341 21:07:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:17.341 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.341 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:17.341 [ 1]:0x2 00:18:17.341 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:17.341 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.600 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7a5ddf450b024d108cd9da072a08ae33 00:18:17.600 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7a5ddf450b024d108cd9da072a08ae33 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.600 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:17.600 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:17.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.600 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:17.859 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:18.117 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:18.117 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 659c5348-6e3c-459c-ac10-64f4238bbe18 -a 10.0.0.2 -s 4420 -i 4 00:18:18.375 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:18.375 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:18.375 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:18.375 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:18.375 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:18.375 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:20.346 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:20.346 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:20.347 [ 0]:0x2 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:20.347 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:20.605 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=7a5ddf450b024d108cd9da072a08ae33 00:18:20.605 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7a5ddf450b024d108cd9da072a08ae33 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:20.605 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:20.864 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:20.864 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:20.864 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:20.864 [ 0]:0x1 00:18:20.864 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:20.864 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:20.864 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=051de49e613348c0bcb40c2ea4767a0b 00:18:20.864 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 051de49e613348c0bcb40c2ea4767a0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:20.864 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:20.864 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:20.864 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:20.864 [ 1]:0x2 00:18:20.864 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:20.864 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:20.864 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7a5ddf450b024d108cd9da072a08ae33 00:18:20.864 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7a5ddf450b024d108cd9da072a08ae33 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:20.864 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.123 21:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:21.123 [ 0]:0x2 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:21.123 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.382 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7a5ddf450b024d108cd9da072a08ae33 00:18:21.382 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7a5ddf450b024d108cd9da072a08ae33 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.382 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:21.382 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:21.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:21.382 21:07:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:21.642 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:21.643 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 659c5348-6e3c-459c-ac10-64f4238bbe18 -a 10.0.0.2 -s 4420 -i 4 00:18:21.903 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:21.903 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:21.903 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.903 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:21.903 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:21.903 21:07:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:23.811 [ 0]:0x1 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=051de49e613348c0bcb40c2ea4767a0b 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 051de49e613348c0bcb40c2ea4767a0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:23.811 [ 1]:0x2 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:23.811 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:24.071 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7a5ddf450b024d108cd9da072a08ae33 00:18:24.071 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7a5ddf450b024d108cd9da072a08ae33 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:24.071 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:24.331 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:24.331 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:24.331 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:24.331 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:24.331 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.331 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:24.331 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.331 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:24.331 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:24.331 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:24.331 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:24.331 21:07:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:24.331 [ 0]:0x2 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7a5ddf450b024d108cd9da072a08ae33 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7a5ddf450b024d108cd9da072a08ae33 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:24.331 21:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:24.331 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:24.590 [2024-11-19 21:07:58.363670] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:24.590 request: 00:18:24.590 { 00:18:24.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.590 "nsid": 2, 00:18:24.590 "host": "nqn.2016-06.io.spdk:host1", 00:18:24.590 "method": "nvmf_ns_remove_host", 00:18:24.590 "req_id": 1 00:18:24.590 } 00:18:24.590 Got JSON-RPC error response 00:18:24.590 response: 00:18:24.590 { 00:18:24.590 "code": -32602, 00:18:24.590 "message": "Invalid parameters" 00:18:24.590 } 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:24.851 21:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:24.851 [ 0]:0x2 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7a5ddf450b024d108cd9da072a08ae33 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7a5ddf450b024d108cd9da072a08ae33 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:24.851 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:25.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:25.110 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:25.110 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2985660 00:18:25.110 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.110 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2985660 /var/tmp/host.sock 00:18:25.110 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2985660 ']' 00:18:25.110 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:25.110 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.110 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:25.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:25.110 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.110 21:07:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:25.110 [2024-11-19 21:07:58.766571] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:18:25.110 [2024-11-19 21:07:58.766706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2985660 ] 00:18:25.369 [2024-11-19 21:07:58.908658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.369 [2024-11-19 21:07:59.043899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.303 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.303 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:26.303 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:26.561 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:26.818 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid fea36513-fcd4-4da0-8ac5-9e9c17fb52b0 00:18:26.818 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:26.818 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FEA36513FCD44DA08AC59E9C17FB52B0 -i 00:18:27.385 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 67e97b5c-55a1-4d92-9d02-6159c536f5b6 00:18:27.385 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:27.385 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 67E97B5C55A14D929D026159C536F5B6 -i 00:18:27.385 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:27.643 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:28.208 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:28.208 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:28.466 nvme0n1 00:18:28.466 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:28.466 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:29.033 nvme1n2 00:18:29.033 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:29.033 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:29.033 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:29.033 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:29.033 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:29.291 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:29.291 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:29.291 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:29.291 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:29.550 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ fea36513-fcd4-4da0-8ac5-9e9c17fb52b0 == \f\e\a\3\6\5\1\3\-\f\c\d\4\-\4\d\a\0\-\8\a\c\5\-\9\e\9\c\1\7\f\b\5\2\b\0 ]] 00:18:29.550 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:29.550 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:29.550 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:29.808 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
67e97b5c-55a1-4d92-9d02-6159c536f5b6 == \6\7\e\9\7\b\5\c\-\5\5\a\1\-\4\d\9\2\-\9\d\0\2\-\6\1\5\9\c\5\3\6\f\5\b\6 ]] 00:18:29.808 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:30.066 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:30.324 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid fea36513-fcd4-4da0-8ac5-9e9c17fb52b0 00:18:30.324 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:30.324 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g FEA36513FCD44DA08AC59E9C17FB52B0 00:18:30.324 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:30.324 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g FEA36513FCD44DA08AC59E9C17FB52B0 00:18:30.324 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:30.324 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.324 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:30.324 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.324 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:30.324 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.324 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:30.324 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:30.324 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g FEA36513FCD44DA08AC59E9C17FB52B0 00:18:30.582 [2024-11-19 21:08:04.300867] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:30.582 [2024-11-19 21:08:04.300930] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:30.582 [2024-11-19 21:08:04.300969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:30.582 request: 00:18:30.582 { 00:18:30.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.582 "namespace": { 00:18:30.582 "bdev_name": 
"invalid", 00:18:30.582 "nsid": 1, 00:18:30.582 "nguid": "FEA36513FCD44DA08AC59E9C17FB52B0", 00:18:30.582 "no_auto_visible": false 00:18:30.582 }, 00:18:30.582 "method": "nvmf_subsystem_add_ns", 00:18:30.582 "req_id": 1 00:18:30.582 } 00:18:30.583 Got JSON-RPC error response 00:18:30.583 response: 00:18:30.583 { 00:18:30.583 "code": -32602, 00:18:30.583 "message": "Invalid parameters" 00:18:30.583 } 00:18:30.583 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:30.583 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.583 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.583 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.583 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid fea36513-fcd4-4da0-8ac5-9e9c17fb52b0 00:18:30.583 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:30.583 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FEA36513FCD44DA08AC59E9C17FB52B0 -i 00:18:30.841 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:33.387 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:33.387 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:33.387 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:33.387 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:33.387 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2985660 00:18:33.387 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2985660 ']' 00:18:33.387 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2985660 00:18:33.387 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:33.387 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.387 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2985660 00:18:33.387 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:33.387 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:33.387 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2985660' 00:18:33.387 killing process with pid 2985660 00:18:33.387 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2985660 00:18:33.387 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2985660 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:35.930 rmmod nvme_tcp 00:18:35.930 rmmod nvme_fabrics 00:18:35.930 rmmod nvme_keyring 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2983904 ']' 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2983904 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2983904 ']' 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2983904 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2983904 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2983904' 00:18:35.930 killing process with pid 2983904 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2983904 00:18:35.930 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2983904 00:18:37.838 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:37.838 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:37.838 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:37.838 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:37.838 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:37.838 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:18:37.838 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:37.838 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:37.838 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:37.838 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.838 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.838 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:39.750 00:18:39.750 real 0m30.040s 00:18:39.750 user 0m44.772s 00:18:39.750 sys 0m4.939s 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:39.750 ************************************ 00:18:39.750 END TEST nvmf_ns_masking 00:18:39.750 ************************************ 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:39.750 ************************************ 00:18:39.750 START TEST nvmf_nvme_cli 00:18:39.750 ************************************ 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:39.750 * Looking for test storage... 
00:18:39.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:39.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.750 --rc genhtml_branch_coverage=1 00:18:39.750 --rc genhtml_function_coverage=1 00:18:39.750 --rc genhtml_legend=1 00:18:39.750 --rc geninfo_all_blocks=1 00:18:39.750 --rc geninfo_unexecuted_blocks=1 00:18:39.750 00:18:39.750 ' 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:39.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.750 --rc genhtml_branch_coverage=1 00:18:39.750 --rc genhtml_function_coverage=1 00:18:39.750 --rc genhtml_legend=1 00:18:39.750 --rc geninfo_all_blocks=1 00:18:39.750 --rc geninfo_unexecuted_blocks=1 00:18:39.750 00:18:39.750 ' 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:39.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.750 --rc genhtml_branch_coverage=1 00:18:39.750 --rc genhtml_function_coverage=1 00:18:39.750 --rc genhtml_legend=1 00:18:39.750 --rc geninfo_all_blocks=1 00:18:39.750 --rc geninfo_unexecuted_blocks=1 00:18:39.750 00:18:39.750 ' 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:39.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.750 --rc genhtml_branch_coverage=1 00:18:39.750 --rc genhtml_function_coverage=1 00:18:39.750 --rc genhtml_legend=1 00:18:39.750 --rc geninfo_all_blocks=1 00:18:39.750 --rc geninfo_unexecuted_blocks=1 00:18:39.750 00:18:39.750 ' 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.750 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:39.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:39.751 21:08:13 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:39.751 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:41.659 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:41.659 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:41.659 
21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:41.659 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:41.660 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:41.660 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:41.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:41.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:18:41.660 00:18:41.660 --- 10.0.0.2 ping statistics --- 00:18:41.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.660 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:41.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:41.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:18:41.660 00:18:41.660 --- 10.0.0.1 ping statistics --- 00:18:41.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.660 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.660 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:41.919 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2988985 00:18:41.919 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:41.919 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2988985 00:18:41.919 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2988985 ']' 00:18:41.919 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.919 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.919 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.919 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.919 21:08:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:41.919 [2024-11-19 21:08:15.548351] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:18:41.919 [2024-11-19 21:08:15.548503] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.919 [2024-11-19 21:08:15.702637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:42.178 [2024-11-19 21:08:15.848979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.178 [2024-11-19 21:08:15.849065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.178 [2024-11-19 21:08:15.849107] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.178 [2024-11-19 21:08:15.849133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.178 [2024-11-19 21:08:15.849153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:42.178 [2024-11-19 21:08:15.852025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.178 [2024-11-19 21:08:15.852098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.178 [2024-11-19 21:08:15.852137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.178 [2024-11-19 21:08:15.852141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:42.745 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.745 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:42.745 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:42.745 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:42.745 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:42.745 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.745 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:42.745 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.745 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:42.745 [2024-11-19 21:08:16.533176] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.003 Malloc0 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.003 Malloc1 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.003 [2024-11-19 21:08:16.731882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.003 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:18:43.263 00:18:43.263 Discovery Log Number of Records 2, Generation counter 2 00:18:43.263 =====Discovery Log Entry 0====== 00:18:43.263 trtype: tcp 00:18:43.263 adrfam: ipv4 00:18:43.263 subtype: current discovery subsystem 00:18:43.263 treq: not required 00:18:43.263 portid: 0 00:18:43.263 trsvcid: 4420 00:18:43.263 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:18:43.263 traddr: 10.0.0.2 00:18:43.263 eflags: explicit discovery connections, duplicate discovery information 00:18:43.263 sectype: none 00:18:43.263 =====Discovery Log Entry 1====== 00:18:43.263 trtype: tcp 00:18:43.263 adrfam: ipv4 00:18:43.263 subtype: nvme subsystem 00:18:43.263 treq: not required 00:18:43.263 portid: 0 00:18:43.263 trsvcid: 4420 00:18:43.263 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:43.263 traddr: 10.0.0.2 00:18:43.263 eflags: none 00:18:43.263 sectype: none 00:18:43.263 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:43.263 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:43.263 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:43.263 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:43.263 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:43.263 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:43.263 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:43.263 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:43.263 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:43.263 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:43.263 21:08:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:43.833 21:08:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:43.833 21:08:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:43.833 21:08:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:43.833 21:08:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:43.833 21:08:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:43.833 21:08:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:46.362 21:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:46.362 /dev/nvme0n2 ]] 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:46.362 21:08:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:46.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:46.362 21:08:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:46.362 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:46.362 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:46.362 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:46.362 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:46.362 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:46.362 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:18:46.362 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:46.362 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:46.362 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.362 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:46.362 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.362 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:46.362 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:46.362 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:46.362 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:46.362 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:46.362 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:46.362 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:46.362 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:46.362 rmmod nvme_tcp 00:18:46.362 rmmod nvme_fabrics 00:18:46.362 rmmod nvme_keyring 00:18:46.622 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:46.622 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:46.622 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:46.622 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2988985 ']' 00:18:46.622 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2988985 00:18:46.622 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2988985 ']' 00:18:46.622 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2988985 00:18:46.622 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:46.622 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.622 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2988985 00:18:46.622 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.622 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.622 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2988985' 00:18:46.622 killing process with pid 2988985 00:18:46.622 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2988985 00:18:46.622 21:08:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2988985 00:18:47.999 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:47.999 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:47.999 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:47.999 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:47.999 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:47.999 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:47.999 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:47.999 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:47.999 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:47.999 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.999 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:47.999 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.908 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:49.908 00:18:49.908 real 0m10.470s 00:18:49.908 user 0m22.674s 00:18:49.908 sys 0m2.457s 00:18:49.908 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.908 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:49.908 ************************************ 00:18:49.908 END TEST nvmf_nvme_cli 00:18:49.908 ************************************ 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:50.167 ************************************ 00:18:50.167 START TEST nvmf_auth_target 00:18:50.167 ************************************ 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 
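The teardown traced above leans on two small harness helpers: waitforserial_disconnect, which polls lsblk until the block device with serial SPDKISFASTANDAWESOME drops out, and killprocess, which checks the pid's command name (reactor_0 here) before killing it and waiting. A hedged bash sketch of that shape, illustrative only and not the autotest_common.sh code; the retry limit is an assumption:

waitforserial_disconnect() {
	local serial=$1 i=0
	while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
		(( i++ >= 15 )) && return 1      # assumed limit, give up after ~15s
		sleep 1
	done
	return 0
}

killprocess() {
	local pid=$1 name
	kill -0 "$pid" 2>/dev/null || return 1           # already gone
	name=$(ps --no-headers -o comm= "$pid")          # e.g. reactor_0 in the trace
	[[ $name == sudo ]] && return 1                  # never kill the sudo wrapper by mistake
	kill "$pid" && wait "$pid"                       # wait only works for this shell's children
}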
00:18:50.167 * Looking for test storage... 00:18:50.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:50.167 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:50.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.168 --rc genhtml_branch_coverage=1 00:18:50.168 --rc genhtml_function_coverage=1 00:18:50.168 --rc genhtml_legend=1 00:18:50.168 --rc geninfo_all_blocks=1 00:18:50.168 --rc geninfo_unexecuted_blocks=1 00:18:50.168 00:18:50.168 ' 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:50.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.168 --rc genhtml_branch_coverage=1 00:18:50.168 --rc genhtml_function_coverage=1 00:18:50.168 --rc genhtml_legend=1 00:18:50.168 --rc geninfo_all_blocks=1 00:18:50.168 --rc geninfo_unexecuted_blocks=1 00:18:50.168 00:18:50.168 ' 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:50.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.168 --rc genhtml_branch_coverage=1 00:18:50.168 --rc genhtml_function_coverage=1 00:18:50.168 --rc genhtml_legend=1 00:18:50.168 --rc geninfo_all_blocks=1 00:18:50.168 --rc geninfo_unexecuted_blocks=1 00:18:50.168 00:18:50.168 ' 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:50.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.168 --rc genhtml_branch_coverage=1 00:18:50.168 --rc genhtml_function_coverage=1 00:18:50.168 --rc genhtml_legend=1 00:18:50.168 --rc geninfo_all_blocks=1 00:18:50.168 --rc geninfo_unexecuted_blocks=1 00:18:50.168 00:18:50.168 ' 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:50.168 21:08:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.168 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:50.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:50.169 21:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:52.072 
21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.072 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:52.073 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:52.073 21:08:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:52.073 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:52.073 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:52.073 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.073 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:52.334 21:08:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:52.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:18:52.334 00:18:52.334 --- 10.0.0.2 ping statistics --- 00:18:52.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.334 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:52.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:52.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:18:52.334 00:18:52.334 --- 10.0.0.1 ping statistics --- 00:18:52.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.334 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:52.334 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:52.334 21:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:52.334 21:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:52.334 21:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.334 21:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.334 21:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2991736 00:18:52.334 21:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:52.334 21:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2991736 00:18:52.334 21:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2991736 ']' 00:18:52.334 21:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.334 21:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.334 21:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
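The nvmf_tcp_init steps traced above split the NIC pair between a target network namespace (10.0.0.2) and the initiator left in the root namespace (10.0.0.1), open TCP/4420, and ping both directions before the target is started. A hedged sketch of the same wiring, with tgt_if, ini_if and nvmf_tgt_ns as placeholders for the CI's cvl_0_0, cvl_0_1 and cvl_0_0_ns_spdk:

NS=nvmf_tgt_ns
ip netns add "$NS"
ip link set tgt_if netns "$NS"                       # target-side port lives in its own netns
ip addr add 10.0.0.1/24 dev ini_if                   # initiator address, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev tgt_if
ip link set ini_if up
ip netns exec "$NS" ip link set tgt_if up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP listener port on the initiator-side interface
iptables -I INPUT 1 -i ini_if -p tcp --dport 4420 -j ACCEPT
# sanity-check both directions before launching nvmf_tgt inside the namespace
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1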
00:18:52.334 21:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.334 21:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2991890 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=61b0793fc99201b73a54916e5a4531e256ca9e7660da7f2b 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.oj4 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 61b0793fc99201b73a54916e5a4531e256ca9e7660da7f2b 0 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 61b0793fc99201b73a54916e5a4531e256ca9e7660da7f2b 0 00:18:53.715 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=61b0793fc99201b73a54916e5a4531e256ca9e7660da7f2b 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
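gen_dhchap_key, traced here for 'null 48' and rerun below for the sha512/sha256/sha384 key pairs, draws random hex characters with xxd and hands them to a small python formatter. A hedged sketch of that flow, assuming the DHHC-1 representation is "DHHC-1:<hash-id>:<base64(secret || crc32-le(secret))>:" and that the hex string itself is used as the secret bytes; both points are assumptions, not taken from this log:

gen_dhchap_key() {
	local hash_id=$1 len=$2 key file
	key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
	file=$(mktemp -t sketch.key-XXX)
	python3 - "$key" "$hash_id" > "$file" <<'PY'
import base64, sys, zlib
secret, hash_id = sys.argv[1].encode(), int(sys.argv[2])   # 0=none 1=sha256 2=sha384 3=sha512 (assumed mapping)
crc = zlib.crc32(secret).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(hash_id, base64.b64encode(secret + crc).decode()))
PY
	chmod 0600 "$file" && echo "$file"
}

keyfile=$(gen_dhchap_key 0 48)    # same shape as the /tmp/spdk.key-null.* file above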
00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.oj4 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.oj4 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.oj4 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5d7dec626977c78dcf3bbac9ebdc9010e5f25e3ad2ba18b2c3aa6b8d93b15913 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.YrW 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5d7dec626977c78dcf3bbac9ebdc9010e5f25e3ad2ba18b2c3aa6b8d93b15913 3 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5d7dec626977c78dcf3bbac9ebdc9010e5f25e3ad2ba18b2c3aa6b8d93b15913 3 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5d7dec626977c78dcf3bbac9ebdc9010e5f25e3ad2ba18b2c3aa6b8d93b15913 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.YrW 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.YrW 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.YrW 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ae5aacb0386af3bd824fa15d3555fe06 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.JbH 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ae5aacb0386af3bd824fa15d3555fe06 1 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ae5aacb0386af3bd824fa15d3555fe06 1 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ae5aacb0386af3bd824fa15d3555fe06 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.JbH 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.JbH 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.JbH 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bb91bd38943a45841ffcc4c2aee8e36e54b961e62083f60b 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.9QI 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bb91bd38943a45841ffcc4c2aee8e36e54b961e62083f60b 2 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bb91bd38943a45841ffcc4c2aee8e36e54b961e62083f60b 2 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:53.716 21:08:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bb91bd38943a45841ffcc4c2aee8e36e54b961e62083f60b 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.9QI 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.9QI 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.9QI 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0e3b013715640eed50844439612bee5812807c2d0dbc8beb 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yXV 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0e3b013715640eed50844439612bee5812807c2d0dbc8beb 2 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0e3b013715640eed50844439612bee5812807c2d0dbc8beb 2 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0e3b013715640eed50844439612bee5812807c2d0dbc8beb 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yXV 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yXV 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.yXV 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=53a8764859e5f5fc869d05d4b1c27ecc 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.kkZ 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 53a8764859e5f5fc869d05d4b1c27ecc 1 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 53a8764859e5f5fc869d05d4b1c27ecc 1 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=53a8764859e5f5fc869d05d4b1c27ecc 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:53.716 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.kkZ 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.kkZ 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.kkZ 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6f6afb48c4ad3d35e99ec1222d1f5618c2f99aabf935c6c9a7f004e03ec8e399 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.IWH 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 6f6afb48c4ad3d35e99ec1222d1f5618c2f99aabf935c6c9a7f004e03ec8e399 3 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6f6afb48c4ad3d35e99ec1222d1f5618c2f99aabf935c6c9a7f004e03ec8e399 3 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6f6afb48c4ad3d35e99ec1222d1f5618c2f99aabf935c6c9a7f004e03ec8e399 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:53.717 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:53.975 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.IWH 00:18:53.975 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.IWH 00:18:53.975 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.IWH 00:18:53.975 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:53.975 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2991736 00:18:53.975 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2991736 ']' 00:18:53.975 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.975 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.975 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.975 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.975 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.233 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.234 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:54.234 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2991890 /var/tmp/host.sock 00:18:54.234 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2991890 ']' 00:18:54.234 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:54.234 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.234 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:54.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
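Both daemons in this trace, the namespaced nvmf_tgt on /var/tmp/spdk.sock and the host-side spdk_tgt on /var/tmp/host.sock, are gated on waitforlisten before any RPC is issued. A hedged sketch of such a gate that polls the RPC socket instead of sleeping a fixed time; rpc.py and rpc_get_methods are real SPDK entry points, but the loop shape, timeout and relative script path are assumptions:

waitforrpc() {
	local pid=$1 sock=$2 i
	for ((i = 0; i < 100; i++)); do
		kill -0 "$pid" 2>/dev/null || return 1                    # app died while starting
		if scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null; then
			return 0                                              # socket is up and answering
		fi
		sleep 0.1
	done
	return 1
}

waitforrpc "$hostpid" /var/tmp/host.sock    # run from an SPDK checkout; hostpid as in the trace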
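The rest of the trace registers each generated key under the names key0..key3 and ckey0..ckey2 on both daemons, then wires DH-HMAC-CHAP into the connect. Condensed into a hedged sketch of the first key pair; the RPC names and flags mirror the trace, while the relative scripts/rpc.py path is a placeholder, and subsystem creation plus its TCP listener are set up elsewhere in auth.sh and omitted here:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
TGT_RPC="scripts/rpc.py"                          # target daemon, default /var/tmp/spdk.sock
HOST_RPC="scripts/rpc.py -s /var/tmp/host.sock"   # initiator-side bdev_nvme stack

# make the secrets visible to both sides under the names key0/ckey0
$TGT_RPC  keyring_file_add_key key0  /tmp/spdk.key-null.oj4
$TGT_RPC  keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YrW
$HOST_RPC keyring_file_add_key key0  /tmp/spdk.key-null.oj4
$HOST_RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YrW

# restrict the initiator to one digest/DH-group combination, allow the host on the
# subsystem with its key pair, then attempt the authenticated connect
$HOST_RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
$TGT_RPC  nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
          --dhchap-key key0 --dhchap-ctrlr-key ckey0
$HOST_RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
          --dhchap-key key0 --dhchap-ctrlr-key ckey0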
00:18:54.234 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.234 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.907 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.907 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:54.907 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:54.908 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.908 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.908 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.908 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:54.908 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oj4 00:18:54.908 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.908 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.908 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.908 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.oj4 00:18:54.908 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.oj4 00:18:55.165 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.YrW ]] 00:18:55.165 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YrW 00:18:55.165 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.165 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.165 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.165 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YrW 00:18:55.165 21:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YrW 00:18:55.423 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:55.423 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.JbH 00:18:55.423 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.423 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.423 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.423 21:08:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.JbH 00:18:55.423 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.JbH 00:18:55.681 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.9QI ]] 00:18:55.681 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9QI 00:18:55.681 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.681 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.681 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.681 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9QI 00:18:55.681 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9QI 00:18:55.939 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:55.939 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.yXV 00:18:55.939 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.939 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.939 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.939 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.yXV 00:18:55.939 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.yXV 00:18:56.505 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.kkZ ]] 00:18:56.505 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kkZ 00:18:56.505 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.505 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.505 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.505 21:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kkZ 00:18:56.505 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kkZ 00:18:56.505 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:56.505 21:08:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.IWH 00:18:56.505 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.505 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.505 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.505 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.IWH 00:18:56.505 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.IWH 00:18:57.073 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:57.073 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:57.073 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.073 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.073 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:57.073 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:57.073 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:57.073 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.073 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:57.073 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:57.073 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:57.073 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.073 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.073 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.073 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.073 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.073 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.073 21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.073 
21:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.640 00:18:57.640 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.640 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.640 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.898 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.898 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.898 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.898 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.898 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.898 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.898 { 00:18:57.898 "cntlid": 1, 00:18:57.898 "qid": 0, 00:18:57.898 "state": "enabled", 00:18:57.898 "thread": "nvmf_tgt_poll_group_000", 00:18:57.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:57.898 "listen_address": { 00:18:57.898 "trtype": "TCP", 00:18:57.898 "adrfam": "IPv4", 00:18:57.898 "traddr": "10.0.0.2", 00:18:57.898 "trsvcid": "4420" 00:18:57.898 }, 00:18:57.898 "peer_address": { 00:18:57.898 "trtype": "TCP", 00:18:57.898 "adrfam": "IPv4", 00:18:57.898 "traddr": "10.0.0.1", 00:18:57.898 "trsvcid": "57984" 00:18:57.898 }, 00:18:57.898 "auth": { 00:18:57.898 "state": "completed", 00:18:57.898 "digest": "sha256", 00:18:57.898 "dhgroup": "null" 00:18:57.898 } 00:18:57.898 } 00:18:57.898 ]' 00:18:57.898 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.898 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.898 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.898 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:57.898 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.898 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.898 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.898 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.156 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:18:58.156 21:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:18:59.092 21:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.092 21:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.092 21:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.092 21:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.092 21:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.092 21:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.092 21:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:59.092 21:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:59.351 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:59.351 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.351 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:59.351 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:59.351 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:59.351 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.351 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.351 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.351 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.351 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.351 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.351 21:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.351 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.920 00:18:59.920 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.920 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.920 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.179 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.179 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.179 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.179 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.179 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.179 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.179 { 00:19:00.179 "cntlid": 3, 00:19:00.179 "qid": 0, 00:19:00.179 "state": "enabled", 00:19:00.179 "thread": "nvmf_tgt_poll_group_000", 00:19:00.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:00.179 "listen_address": { 00:19:00.179 "trtype": "TCP", 00:19:00.179 "adrfam": "IPv4", 00:19:00.179 "traddr": "10.0.0.2", 00:19:00.179 "trsvcid": "4420" 00:19:00.179 }, 00:19:00.179 "peer_address": { 00:19:00.179 "trtype": "TCP", 00:19:00.179 "adrfam": "IPv4", 00:19:00.179 "traddr": "10.0.0.1", 00:19:00.179 "trsvcid": "58012" 00:19:00.179 }, 00:19:00.179 "auth": { 00:19:00.179 "state": "completed", 00:19:00.179 "digest": "sha256", 00:19:00.179 "dhgroup": "null" 00:19:00.179 } 00:19:00.179 } 00:19:00.179 ]' 00:19:00.179 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.179 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.179 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.179 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:00.179 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.179 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.179 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.179 21:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.439 21:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:19:00.439 21:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:19:01.373 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.631 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.631 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.631 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.631 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.631 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.631 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:01.631 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:01.890 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:01.890 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.890 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:01.890 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:01.890 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:01.890 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.890 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.890 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.890 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.890 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.890 21:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.890 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.890 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.148 00:19:02.148 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.148 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.148 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.406 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.406 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.406 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.406 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.406 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.406 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.406 { 00:19:02.406 "cntlid": 5, 00:19:02.406 "qid": 0, 00:19:02.406 "state": "enabled", 00:19:02.406 "thread": "nvmf_tgt_poll_group_000", 00:19:02.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:02.406 "listen_address": { 00:19:02.406 "trtype": "TCP", 00:19:02.406 "adrfam": "IPv4", 00:19:02.406 "traddr": "10.0.0.2", 00:19:02.406 "trsvcid": "4420" 00:19:02.406 }, 00:19:02.406 "peer_address": { 00:19:02.406 "trtype": "TCP", 00:19:02.406 "adrfam": "IPv4", 00:19:02.406 "traddr": "10.0.0.1", 00:19:02.406 "trsvcid": "58030" 00:19:02.406 }, 00:19:02.406 "auth": { 00:19:02.406 "state": "completed", 00:19:02.406 "digest": "sha256", 00:19:02.406 "dhgroup": "null" 00:19:02.406 } 00:19:02.406 } 00:19:02.406 ]' 00:19:02.406 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.406 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.406 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.406 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:02.406 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.406 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.407 21:08:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.407 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.974 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:19:02.974 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:19:03.909 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.909 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.910 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.910 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.910 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.910 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.910 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:03.910 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:04.168 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:04.168 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.168 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:04.168 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:04.168 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:04.168 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.168 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:04.168 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.168 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:04.168 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.168 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:04.168 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:04.168 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:04.426 00:19:04.426 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.426 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.426 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.684 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.684 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.684 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.684 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.684 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.684 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.684 { 00:19:04.684 "cntlid": 7, 00:19:04.684 "qid": 0, 00:19:04.684 "state": "enabled", 00:19:04.684 "thread": "nvmf_tgt_poll_group_000", 00:19:04.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:04.684 "listen_address": { 00:19:04.684 "trtype": "TCP", 00:19:04.684 "adrfam": "IPv4", 00:19:04.684 "traddr": "10.0.0.2", 00:19:04.684 "trsvcid": "4420" 00:19:04.684 }, 00:19:04.684 "peer_address": { 00:19:04.684 "trtype": "TCP", 00:19:04.684 "adrfam": "IPv4", 00:19:04.684 "traddr": "10.0.0.1", 00:19:04.684 "trsvcid": "58050" 00:19:04.684 }, 00:19:04.684 "auth": { 00:19:04.684 "state": "completed", 00:19:04.684 "digest": "sha256", 00:19:04.684 "dhgroup": "null" 00:19:04.684 } 00:19:04.684 } 00:19:04.684 ]' 00:19:04.684 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.684 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.684 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.684 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:04.684 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.684 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.684 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.684 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.943 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:19:04.943 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:19:05.880 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.138 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.138 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.138 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.138 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.138 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.138 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.138 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:06.138 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:06.396 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:06.396 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.396 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:06.396 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:06.396 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:06.396 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.396 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.396 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.396 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.396 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.396 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.396 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.396 21:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.655 00:19:06.655 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.655 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.655 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.913 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.913 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.913 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.913 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.913 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.913 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.913 { 00:19:06.913 "cntlid": 9, 00:19:06.913 "qid": 0, 00:19:06.913 "state": "enabled", 00:19:06.913 "thread": "nvmf_tgt_poll_group_000", 00:19:06.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:06.913 "listen_address": { 00:19:06.913 "trtype": "TCP", 00:19:06.913 "adrfam": "IPv4", 00:19:06.913 "traddr": "10.0.0.2", 00:19:06.913 "trsvcid": "4420" 00:19:06.913 }, 00:19:06.913 "peer_address": { 00:19:06.913 "trtype": "TCP", 00:19:06.913 "adrfam": "IPv4", 00:19:06.913 "traddr": "10.0.0.1", 00:19:06.913 "trsvcid": "51360" 00:19:06.913 }, 00:19:06.913 "auth": { 00:19:06.913 "state": "completed", 00:19:06.913 "digest": "sha256", 00:19:06.913 "dhgroup": "ffdhe2048" 00:19:06.913 } 00:19:06.913 } 00:19:06.913 ]' 00:19:06.913 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.913 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.913 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.913 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:06.913 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.172 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.172 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.172 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.429 21:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:19:07.429 21:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:19:08.364 21:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.365 21:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.365 21:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.365 21:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.365 21:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.365 21:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.365 21:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:08.365 21:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:08.622 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:08.622 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.622 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:08.622 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:08.622 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:08.622 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.622 21:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.622 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.622 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.622 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.622 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.622 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.622 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.880 00:19:08.880 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.880 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.880 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.138 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.138 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.138 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.138 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.138 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.138 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.138 { 00:19:09.138 "cntlid": 11, 00:19:09.138 "qid": 0, 00:19:09.138 "state": "enabled", 00:19:09.138 "thread": "nvmf_tgt_poll_group_000", 00:19:09.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:09.138 "listen_address": { 00:19:09.138 "trtype": "TCP", 00:19:09.138 "adrfam": "IPv4", 00:19:09.138 "traddr": "10.0.0.2", 00:19:09.138 "trsvcid": "4420" 00:19:09.138 }, 00:19:09.138 "peer_address": { 00:19:09.138 "trtype": "TCP", 00:19:09.138 "adrfam": "IPv4", 00:19:09.138 "traddr": "10.0.0.1", 00:19:09.138 "trsvcid": "51394" 00:19:09.138 }, 00:19:09.138 "auth": { 00:19:09.138 "state": "completed", 00:19:09.138 "digest": "sha256", 00:19:09.138 "dhgroup": "ffdhe2048" 00:19:09.138 } 00:19:09.138 } 00:19:09.138 ]' 00:19:09.138 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.396 21:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.397 21:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.397 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:09.397 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.397 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.397 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.397 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.654 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:19:09.654 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:19:10.589 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.589 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.589 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.589 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.589 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.589 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.589 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.589 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.847 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:10.847 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.847 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:10.848 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:10.848 21:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:10.848 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.848 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.848 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.848 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.848 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.848 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.848 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.848 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.412 00:19:11.412 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.412 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.412 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.670 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.670 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.670 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.670 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.670 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.670 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.670 { 00:19:11.670 "cntlid": 13, 00:19:11.670 "qid": 0, 00:19:11.671 "state": "enabled", 00:19:11.671 "thread": "nvmf_tgt_poll_group_000", 00:19:11.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:11.671 "listen_address": { 00:19:11.671 "trtype": "TCP", 00:19:11.671 "adrfam": "IPv4", 00:19:11.671 "traddr": "10.0.0.2", 00:19:11.671 "trsvcid": "4420" 00:19:11.671 }, 00:19:11.671 "peer_address": { 00:19:11.671 "trtype": "TCP", 00:19:11.671 "adrfam": "IPv4", 00:19:11.671 "traddr": "10.0.0.1", 00:19:11.671 "trsvcid": "51430" 00:19:11.671 }, 00:19:11.671 "auth": { 00:19:11.671 "state": "completed", 00:19:11.671 "digest": 
"sha256", 00:19:11.671 "dhgroup": "ffdhe2048" 00:19:11.671 } 00:19:11.671 } 00:19:11.671 ]' 00:19:11.671 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.671 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.671 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.671 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:11.671 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.671 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.671 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.671 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.930 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:19:11.930 21:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:19:12.861 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.119 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.119 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.119 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.119 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.119 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.119 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.119 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.376 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:13.376 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.376 21:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:13.376 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:13.376 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:13.376 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.376 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:13.376 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.376 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.376 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.376 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:13.376 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.376 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.633 00:19:13.633 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.633 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.633 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.891 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.891 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.891 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.891 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.891 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.891 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.891 { 00:19:13.891 "cntlid": 15, 00:19:13.891 "qid": 0, 00:19:13.891 "state": "enabled", 00:19:13.891 "thread": "nvmf_tgt_poll_group_000", 00:19:13.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:13.891 "listen_address": { 00:19:13.891 "trtype": "TCP", 00:19:13.891 "adrfam": "IPv4", 00:19:13.891 "traddr": "10.0.0.2", 00:19:13.891 "trsvcid": "4420" 00:19:13.891 }, 00:19:13.891 "peer_address": { 00:19:13.891 "trtype": "TCP", 00:19:13.891 "adrfam": "IPv4", 00:19:13.891 "traddr": "10.0.0.1", 00:19:13.891 
"trsvcid": "51440" 00:19:13.891 }, 00:19:13.891 "auth": { 00:19:13.891 "state": "completed", 00:19:13.891 "digest": "sha256", 00:19:13.891 "dhgroup": "ffdhe2048" 00:19:13.891 } 00:19:13.891 } 00:19:13.891 ]' 00:19:13.891 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.891 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.891 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.891 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.891 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.147 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.147 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.147 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.404 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:19:14.404 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:19:15.339 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.339 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.339 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.339 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.339 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.339 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.339 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.339 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:15.339 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:15.596 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:15.596 21:08:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.596 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:15.596 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:15.596 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:15.596 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.596 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.596 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.596 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.596 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.596 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.596 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.596 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.165 00:19:16.165 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.165 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.165 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.165 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.165 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.165 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.165 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.165 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.165 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.165 { 00:19:16.165 "cntlid": 17, 00:19:16.165 "qid": 0, 00:19:16.165 "state": "enabled", 00:19:16.165 "thread": "nvmf_tgt_poll_group_000", 00:19:16.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:16.165 "listen_address": { 00:19:16.165 "trtype": "TCP", 00:19:16.165 "adrfam": "IPv4", 
00:19:16.165 "traddr": "10.0.0.2", 00:19:16.165 "trsvcid": "4420" 00:19:16.165 }, 00:19:16.165 "peer_address": { 00:19:16.165 "trtype": "TCP", 00:19:16.165 "adrfam": "IPv4", 00:19:16.165 "traddr": "10.0.0.1", 00:19:16.165 "trsvcid": "35498" 00:19:16.165 }, 00:19:16.165 "auth": { 00:19:16.165 "state": "completed", 00:19:16.165 "digest": "sha256", 00:19:16.165 "dhgroup": "ffdhe3072" 00:19:16.165 } 00:19:16.165 } 00:19:16.165 ]' 00:19:16.423 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.423 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.423 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.423 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:16.423 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.423 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.423 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.423 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.680 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:19:16.680 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:19:17.616 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.616 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.616 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.616 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.616 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.617 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.617 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:17.617 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:17.875 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:17.875 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.875 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:17.875 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:17.875 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:17.875 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.875 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.875 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.875 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.875 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.875 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.875 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.875 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.441 00:19:18.441 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.441 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.441 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.700 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.700 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.700 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.700 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.700 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.700 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.700 { 
00:19:18.700 "cntlid": 19, 00:19:18.700 "qid": 0, 00:19:18.700 "state": "enabled", 00:19:18.700 "thread": "nvmf_tgt_poll_group_000", 00:19:18.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:18.700 "listen_address": { 00:19:18.700 "trtype": "TCP", 00:19:18.700 "adrfam": "IPv4", 00:19:18.700 "traddr": "10.0.0.2", 00:19:18.700 "trsvcid": "4420" 00:19:18.700 }, 00:19:18.700 "peer_address": { 00:19:18.700 "trtype": "TCP", 00:19:18.700 "adrfam": "IPv4", 00:19:18.700 "traddr": "10.0.0.1", 00:19:18.700 "trsvcid": "35534" 00:19:18.700 }, 00:19:18.700 "auth": { 00:19:18.700 "state": "completed", 00:19:18.700 "digest": "sha256", 00:19:18.700 "dhgroup": "ffdhe3072" 00:19:18.700 } 00:19:18.700 } 00:19:18.700 ]' 00:19:18.700 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.700 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.700 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.700 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:18.700 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.700 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.700 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.700 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.959 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:19:19.218 21:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:19:20.155 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.155 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.155 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.155 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.155 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.155 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.155 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:20.155 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:20.413 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:20.413 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.413 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:20.413 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:20.413 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:20.413 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.413 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.413 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.413 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.413 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.413 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.413 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.413 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.671 00:19:20.671 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.671 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.671 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.929 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.929 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.929 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.929 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.929 21:08:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.929 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.929 { 00:19:20.929 "cntlid": 21, 00:19:20.929 "qid": 0, 00:19:20.929 "state": "enabled", 00:19:20.929 "thread": "nvmf_tgt_poll_group_000", 00:19:20.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:20.929 "listen_address": { 00:19:20.929 "trtype": "TCP", 00:19:20.929 "adrfam": "IPv4", 00:19:20.929 "traddr": "10.0.0.2", 00:19:20.929 "trsvcid": "4420" 00:19:20.929 }, 00:19:20.929 "peer_address": { 00:19:20.929 "trtype": "TCP", 00:19:20.929 "adrfam": "IPv4", 00:19:20.929 "traddr": "10.0.0.1", 00:19:20.929 "trsvcid": "35548" 00:19:20.929 }, 00:19:20.929 "auth": { 00:19:20.929 "state": "completed", 00:19:20.929 "digest": "sha256", 00:19:20.929 "dhgroup": "ffdhe3072" 00:19:20.929 } 00:19:20.929 } 00:19:20.929 ]' 00:19:20.929 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.187 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.187 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.187 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:21.187 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.187 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.187 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.187 21:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.445 21:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:19:21.445 21:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:19:22.384 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.384 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.384 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.384 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.384 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:22.384 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.384 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:22.384 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:22.642 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:22.642 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.642 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:22.642 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:22.642 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:22.642 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.642 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:22.642 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.642 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.642 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.642 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:22.642 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.642 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.210 00:19:23.210 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.210 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.210 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.469 21:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.469 21:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.469 21:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.469 21:08:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.469 21:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.469 21:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.469 { 00:19:23.469 "cntlid": 23, 00:19:23.469 "qid": 0, 00:19:23.469 "state": "enabled", 00:19:23.469 "thread": "nvmf_tgt_poll_group_000", 00:19:23.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:23.469 "listen_address": { 00:19:23.469 "trtype": "TCP", 00:19:23.469 "adrfam": "IPv4", 00:19:23.469 "traddr": "10.0.0.2", 00:19:23.469 "trsvcid": "4420" 00:19:23.469 }, 00:19:23.469 "peer_address": { 00:19:23.469 "trtype": "TCP", 00:19:23.469 "adrfam": "IPv4", 00:19:23.469 "traddr": "10.0.0.1", 00:19:23.469 "trsvcid": "35562" 00:19:23.469 }, 00:19:23.469 "auth": { 00:19:23.469 "state": "completed", 00:19:23.469 "digest": "sha256", 00:19:23.469 "dhgroup": "ffdhe3072" 00:19:23.469 } 00:19:23.469 } 00:19:23.469 ]' 00:19:23.469 21:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.469 21:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.469 21:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.469 21:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:23.469 21:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.469 21:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.469 21:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.469 21:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.729 21:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:19:23.729 21:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:19:24.715 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.715 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.715 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.715 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.715 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:24.715 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.715 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.715 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:24.715 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:24.973 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:24.973 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.973 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:24.973 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:24.973 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:24.973 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.973 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.973 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.973 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.973 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.973 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.973 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.973 21:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.538 00:19:25.538 21:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.538 21:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.538 21:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.796 21:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.796 21:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.796 21:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.796 21:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.796 21:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.796 21:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.796 { 00:19:25.796 "cntlid": 25, 00:19:25.796 "qid": 0, 00:19:25.796 "state": "enabled", 00:19:25.796 "thread": "nvmf_tgt_poll_group_000", 00:19:25.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:25.796 "listen_address": { 00:19:25.796 "trtype": "TCP", 00:19:25.796 "adrfam": "IPv4", 00:19:25.796 "traddr": "10.0.0.2", 00:19:25.796 "trsvcid": "4420" 00:19:25.796 }, 00:19:25.796 "peer_address": { 00:19:25.796 "trtype": "TCP", 00:19:25.796 "adrfam": "IPv4", 00:19:25.796 "traddr": "10.0.0.1", 00:19:25.796 "trsvcid": "35586" 00:19:25.796 }, 00:19:25.796 "auth": { 00:19:25.796 "state": "completed", 00:19:25.796 "digest": "sha256", 00:19:25.796 "dhgroup": "ffdhe4096" 00:19:25.796 } 00:19:25.796 } 00:19:25.796 ]' 00:19:25.796 21:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.796 21:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.796 21:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.796 21:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:25.796 21:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.796 21:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.796 21:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.796 21:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.055 21:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:19:26.055 21:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:19:27.430 21:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.430 21:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.430 21:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.430 21:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.430 21:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.430 21:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.430 21:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:27.430 21:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:27.430 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:27.430 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.430 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:27.430 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:27.430 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:27.430 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.430 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.430 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.430 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.430 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.430 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.431 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.431 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.997 00:19:27.997 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.997 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.997 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.256 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.257 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.257 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.257 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.257 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.257 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.257 { 00:19:28.257 "cntlid": 27, 00:19:28.257 "qid": 0, 00:19:28.257 "state": "enabled", 00:19:28.257 "thread": "nvmf_tgt_poll_group_000", 00:19:28.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:28.257 "listen_address": { 00:19:28.257 "trtype": "TCP", 00:19:28.257 "adrfam": "IPv4", 00:19:28.257 "traddr": "10.0.0.2", 00:19:28.257 "trsvcid": "4420" 00:19:28.257 }, 00:19:28.257 "peer_address": { 00:19:28.257 "trtype": "TCP", 00:19:28.257 "adrfam": "IPv4", 00:19:28.257 "traddr": "10.0.0.1", 00:19:28.257 "trsvcid": "40992" 00:19:28.257 }, 00:19:28.257 "auth": { 00:19:28.257 "state": "completed", 00:19:28.257 "digest": "sha256", 00:19:28.257 "dhgroup": "ffdhe4096" 00:19:28.257 } 00:19:28.257 } 00:19:28.257 ]' 00:19:28.257 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.257 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.257 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.257 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:28.257 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.257 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.257 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.257 21:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.515 21:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:19:28.515 21:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:19:29.449 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:29.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.450 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.450 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.450 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.450 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.450 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.450 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:29.450 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:30.017 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:30.017 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.017 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:30.017 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:30.017 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:30.017 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.017 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.017 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.017 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.017 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.017 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.017 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.017 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.276 00:19:30.276 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
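The xtrace above and below repeats the same DH-HMAC-CHAP round for every (digest, dhgroup, key) combination, here sha256 with ffdhe4096 and key2. A condensed sketch of one such round follows for reference; the RPC methods and nvme-cli flags are exactly the ones visible in the trace, while the shell variables (SUBNQN, HOSTNQN, HOSTID, DHCHAP_KEY, DHCHAP_CTRL_KEY) and the piping of nvmf_subsystem_get_qpairs straight into jq are illustrative placeholders for this sketch, not the literal code of target/auth.sh.

# Sketch of one authentication round, assuming the sockets and addresses seen in the trace.
# rpc_cmd is the test harness wrapper for the target-side RPC socket; the host-side SPDK
# initiator is driven through rpc.py against /var/tmp/host.sock.
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# 1. Restrict the host-side initiator to the digest/dhgroup under test.
"$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# 2. Register the host on the target with the key pair under test
#    (the ctrlr key is only passed when a controller key exists for this key id).
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Authenticate with the SPDK initiator and verify what was negotiated on the qpair.
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'    # expect "completed"
"$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# 4. Repeat the authentication with the kernel initiator, then clean up for the next round.
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"
nvme disconnect -n "$SUBNQN"
rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

Each pass ends by detaching the controller, disconnecting the kernel host, and removing the host from the subsystem, so the next (digest, dhgroup, key) combination in the trace starts from a clean registration state.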
00:19:30.276 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.276 21:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.534 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.534 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.534 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.534 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.534 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.534 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.534 { 00:19:30.534 "cntlid": 29, 00:19:30.534 "qid": 0, 00:19:30.534 "state": "enabled", 00:19:30.534 "thread": "nvmf_tgt_poll_group_000", 00:19:30.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:30.534 "listen_address": { 00:19:30.534 "trtype": "TCP", 00:19:30.534 "adrfam": "IPv4", 00:19:30.534 "traddr": "10.0.0.2", 00:19:30.534 "trsvcid": "4420" 00:19:30.534 }, 00:19:30.534 "peer_address": { 00:19:30.534 "trtype": "TCP", 00:19:30.534 "adrfam": "IPv4", 00:19:30.534 "traddr": "10.0.0.1", 00:19:30.534 "trsvcid": "41014" 00:19:30.534 }, 00:19:30.534 "auth": { 00:19:30.534 "state": "completed", 00:19:30.534 "digest": "sha256", 00:19:30.534 "dhgroup": "ffdhe4096" 00:19:30.534 } 00:19:30.534 } 00:19:30.534 ]' 00:19:30.534 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.534 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.534 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.534 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:30.534 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.534 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.534 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.534 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.102 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:19:31.102 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: 
--dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:19:32.036 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.036 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.036 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.036 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.036 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.036 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.036 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:32.036 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:32.294 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:32.294 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.294 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.294 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:32.294 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:32.294 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.294 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:32.294 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.294 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.294 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.294 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:32.294 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.294 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.552 00:19:32.553 21:09:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.553 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.553 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.812 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.812 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.812 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.812 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.812 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.812 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.812 { 00:19:32.812 "cntlid": 31, 00:19:32.812 "qid": 0, 00:19:32.812 "state": "enabled", 00:19:32.812 "thread": "nvmf_tgt_poll_group_000", 00:19:32.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:32.812 "listen_address": { 00:19:32.812 "trtype": "TCP", 00:19:32.812 "adrfam": "IPv4", 00:19:32.812 "traddr": "10.0.0.2", 00:19:32.812 "trsvcid": "4420" 00:19:32.812 }, 00:19:32.812 "peer_address": { 00:19:32.812 "trtype": "TCP", 00:19:32.812 "adrfam": "IPv4", 00:19:32.812 "traddr": "10.0.0.1", 00:19:32.812 "trsvcid": "41044" 00:19:32.812 }, 00:19:32.812 "auth": { 00:19:32.812 "state": "completed", 00:19:32.812 "digest": "sha256", 00:19:32.812 "dhgroup": "ffdhe4096" 00:19:32.812 } 00:19:32.812 } 00:19:32.812 ]' 00:19:32.812 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.071 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.071 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.071 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:33.071 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.071 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.071 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.071 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.329 21:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:19:33.329 21:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:19:34.265 21:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.265 21:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.265 21:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.265 21:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.265 21:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.265 21:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.265 21:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.265 21:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:34.265 21:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:34.524 21:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:34.524 21:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.524 21:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:34.524 21:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:34.524 21:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:34.524 21:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.524 21:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.524 21:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.524 21:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.524 21:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.524 21:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.524 21:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.524 21:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.089 00:19:35.089 21:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.089 21:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.089 21:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.346 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.346 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.346 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.346 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.346 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.346 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.346 { 00:19:35.346 "cntlid": 33, 00:19:35.346 "qid": 0, 00:19:35.346 "state": "enabled", 00:19:35.346 "thread": "nvmf_tgt_poll_group_000", 00:19:35.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:35.346 "listen_address": { 00:19:35.346 "trtype": "TCP", 00:19:35.346 "adrfam": "IPv4", 00:19:35.346 "traddr": "10.0.0.2", 00:19:35.346 "trsvcid": "4420" 00:19:35.346 }, 00:19:35.346 "peer_address": { 00:19:35.346 "trtype": "TCP", 00:19:35.346 "adrfam": "IPv4", 00:19:35.346 "traddr": "10.0.0.1", 00:19:35.346 "trsvcid": "41058" 00:19:35.346 }, 00:19:35.346 "auth": { 00:19:35.346 "state": "completed", 00:19:35.346 "digest": "sha256", 00:19:35.346 "dhgroup": "ffdhe6144" 00:19:35.346 } 00:19:35.346 } 00:19:35.346 ]' 00:19:35.346 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.606 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.606 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.606 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.606 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.606 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.606 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.606 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.865 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret 
DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:19:35.865 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:19:36.797 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.798 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.798 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.798 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.798 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.798 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.798 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:36.798 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:37.056 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:37.056 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.056 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.056 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:37.056 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:37.056 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.056 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.056 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.056 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.056 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.056 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.056 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.056 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.622 00:19:37.622 21:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.622 21:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.622 21:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.880 21:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.880 21:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.880 21:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.880 21:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.880 21:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.880 21:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.880 { 00:19:37.880 "cntlid": 35, 00:19:37.880 "qid": 0, 00:19:37.880 "state": "enabled", 00:19:37.880 "thread": "nvmf_tgt_poll_group_000", 00:19:37.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:37.880 "listen_address": { 00:19:37.880 "trtype": "TCP", 00:19:37.880 "adrfam": "IPv4", 00:19:37.880 "traddr": "10.0.0.2", 00:19:37.880 "trsvcid": "4420" 00:19:37.880 }, 00:19:37.880 "peer_address": { 00:19:37.880 "trtype": "TCP", 00:19:37.880 "adrfam": "IPv4", 00:19:37.880 "traddr": "10.0.0.1", 00:19:37.880 "trsvcid": "59088" 00:19:37.880 }, 00:19:37.880 "auth": { 00:19:37.880 "state": "completed", 00:19:37.880 "digest": "sha256", 00:19:37.880 "dhgroup": "ffdhe6144" 00:19:37.880 } 00:19:37.880 } 00:19:37.880 ]' 00:19:37.881 21:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.138 21:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.138 21:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.138 21:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:38.138 21:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.138 21:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.138 21:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.138 21:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.397 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:19:38.397 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:19:39.331 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.331 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.331 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.331 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.331 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.331 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.331 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:39.331 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:39.590 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:39.590 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.590 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.590 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:39.590 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:39.590 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.590 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.590 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.590 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.590 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.590 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.590 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.590 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.156 00:19:40.156 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.156 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.156 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.415 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.415 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.415 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.415 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.415 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.415 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.415 { 00:19:40.415 "cntlid": 37, 00:19:40.415 "qid": 0, 00:19:40.415 "state": "enabled", 00:19:40.415 "thread": "nvmf_tgt_poll_group_000", 00:19:40.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:40.415 "listen_address": { 00:19:40.415 "trtype": "TCP", 00:19:40.415 "adrfam": "IPv4", 00:19:40.415 "traddr": "10.0.0.2", 00:19:40.415 "trsvcid": "4420" 00:19:40.415 }, 00:19:40.415 "peer_address": { 00:19:40.415 "trtype": "TCP", 00:19:40.415 "adrfam": "IPv4", 00:19:40.415 "traddr": "10.0.0.1", 00:19:40.415 "trsvcid": "59116" 00:19:40.415 }, 00:19:40.415 "auth": { 00:19:40.415 "state": "completed", 00:19:40.415 "digest": "sha256", 00:19:40.415 "dhgroup": "ffdhe6144" 00:19:40.415 } 00:19:40.415 } 00:19:40.415 ]' 00:19:40.415 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.415 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.415 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.673 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:40.673 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.673 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.673 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:40.673 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.931 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:19:40.931 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:19:41.864 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.864 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.864 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.864 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.864 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.864 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.864 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:41.864 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:42.123 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:42.123 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.123 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:42.123 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:42.123 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:42.123 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.123 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:42.123 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.123 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.123 21:09:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.123 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:42.123 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:42.123 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:43.056 00:19:43.056 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.056 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.056 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.056 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.056 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.056 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.056 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.056 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.056 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.056 { 00:19:43.056 "cntlid": 39, 00:19:43.056 "qid": 0, 00:19:43.056 "state": "enabled", 00:19:43.056 "thread": "nvmf_tgt_poll_group_000", 00:19:43.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:43.056 "listen_address": { 00:19:43.056 "trtype": "TCP", 00:19:43.056 "adrfam": "IPv4", 00:19:43.056 "traddr": "10.0.0.2", 00:19:43.056 "trsvcid": "4420" 00:19:43.056 }, 00:19:43.056 "peer_address": { 00:19:43.056 "trtype": "TCP", 00:19:43.056 "adrfam": "IPv4", 00:19:43.056 "traddr": "10.0.0.1", 00:19:43.056 "trsvcid": "59148" 00:19:43.056 }, 00:19:43.056 "auth": { 00:19:43.056 "state": "completed", 00:19:43.056 "digest": "sha256", 00:19:43.056 "dhgroup": "ffdhe6144" 00:19:43.056 } 00:19:43.056 } 00:19:43.056 ]' 00:19:43.056 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.314 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.314 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.314 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:43.314 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.314 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:43.314 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.314 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.571 21:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:19:43.571 21:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:19:44.503 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.503 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.503 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.503 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.503 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.503 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.503 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.503 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.503 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.761 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:44.761 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.761 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.761 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:44.761 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:44.761 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.761 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.761 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:44.761 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.761 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.761 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.761 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.761 21:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.695 00:19:45.695 21:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.695 21:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.695 21:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.952 21:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.953 21:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.953 21:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.953 21:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.953 21:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.953 21:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.953 { 00:19:45.953 "cntlid": 41, 00:19:45.953 "qid": 0, 00:19:45.953 "state": "enabled", 00:19:45.953 "thread": "nvmf_tgt_poll_group_000", 00:19:45.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:45.953 "listen_address": { 00:19:45.953 "trtype": "TCP", 00:19:45.953 "adrfam": "IPv4", 00:19:45.953 "traddr": "10.0.0.2", 00:19:45.953 "trsvcid": "4420" 00:19:45.953 }, 00:19:45.953 "peer_address": { 00:19:45.953 "trtype": "TCP", 00:19:45.953 "adrfam": "IPv4", 00:19:45.953 "traddr": "10.0.0.1", 00:19:45.953 "trsvcid": "59178" 00:19:45.953 }, 00:19:45.953 "auth": { 00:19:45.953 "state": "completed", 00:19:45.953 "digest": "sha256", 00:19:45.953 "dhgroup": "ffdhe8192" 00:19:45.953 } 00:19:45.953 } 00:19:45.953 ]' 00:19:45.953 21:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.953 21:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.953 21:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.953 21:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.953 21:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.953 21:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.953 21:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.953 21:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.519 21:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:19:46.519 21:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:19:47.453 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.453 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.453 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.453 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.453 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.453 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.453 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.453 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.711 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:47.711 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.711 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:47.711 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:47.711 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:47.711 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.711 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.711 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.711 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.711 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.711 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.711 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.711 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.644 00:19:48.644 21:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.644 21:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.644 21:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.903 21:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.903 21:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.903 21:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.903 21:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.903 21:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.903 21:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.903 { 00:19:48.903 "cntlid": 43, 00:19:48.903 "qid": 0, 00:19:48.903 "state": "enabled", 00:19:48.903 "thread": "nvmf_tgt_poll_group_000", 00:19:48.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:48.903 "listen_address": { 00:19:48.903 "trtype": "TCP", 00:19:48.903 "adrfam": "IPv4", 00:19:48.903 "traddr": "10.0.0.2", 00:19:48.903 "trsvcid": "4420" 00:19:48.903 }, 00:19:48.903 "peer_address": { 00:19:48.903 "trtype": "TCP", 00:19:48.903 "adrfam": "IPv4", 00:19:48.903 "traddr": "10.0.0.1", 00:19:48.903 "trsvcid": "54158" 00:19:48.903 }, 00:19:48.903 "auth": { 00:19:48.903 "state": "completed", 00:19:48.903 "digest": "sha256", 00:19:48.903 "dhgroup": "ffdhe8192" 00:19:48.903 } 00:19:48.903 } 00:19:48.903 ]' 00:19:48.903 21:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.903 21:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:48.903 21:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.903 21:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:48.903 21:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.903 21:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.903 21:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.903 21:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.469 21:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:19:49.469 21:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:19:50.403 21:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.403 21:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.403 21:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.403 21:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.403 21:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.403 21:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.403 21:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:50.403 21:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:50.661 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:50.661 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.661 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.661 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:50.661 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:50.661 21:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.661 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.661 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.661 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.661 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.661 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.661 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.661 21:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.595 00:19:51.595 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.595 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.595 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.853 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.853 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.853 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.853 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.853 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.853 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.853 { 00:19:51.853 "cntlid": 45, 00:19:51.853 "qid": 0, 00:19:51.853 "state": "enabled", 00:19:51.853 "thread": "nvmf_tgt_poll_group_000", 00:19:51.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:51.853 "listen_address": { 00:19:51.853 "trtype": "TCP", 00:19:51.853 "adrfam": "IPv4", 00:19:51.853 "traddr": "10.0.0.2", 00:19:51.853 "trsvcid": "4420" 00:19:51.853 }, 00:19:51.853 "peer_address": { 00:19:51.853 "trtype": "TCP", 00:19:51.853 "adrfam": "IPv4", 00:19:51.853 "traddr": "10.0.0.1", 00:19:51.853 "trsvcid": "54204" 00:19:51.853 }, 00:19:51.853 "auth": { 00:19:51.853 "state": "completed", 00:19:51.853 "digest": "sha256", 00:19:51.853 "dhgroup": "ffdhe8192" 00:19:51.853 } 00:19:51.853 } 00:19:51.853 ]' 00:19:51.853 
21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.853 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.853 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.853 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:51.853 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.853 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.853 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.853 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.110 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:19:52.110 21:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:19:53.043 21:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.043 21:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.043 21:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.043 21:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.043 21:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.043 21:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.043 21:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:53.043 21:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:53.301 21:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:53.301 21:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.301 21:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:53.301 21:09:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:53.301 21:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:53.301 21:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.301 21:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:53.301 21:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.301 21:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.559 21:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.559 21:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:53.559 21:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.559 21:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:54.493 00:19:54.493 21:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.493 21:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.493 21:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.493 21:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.493 21:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.493 21:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.493 21:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.493 21:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.493 21:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.493 { 00:19:54.493 "cntlid": 47, 00:19:54.493 "qid": 0, 00:19:54.493 "state": "enabled", 00:19:54.493 "thread": "nvmf_tgt_poll_group_000", 00:19:54.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:54.493 "listen_address": { 00:19:54.493 "trtype": "TCP", 00:19:54.493 "adrfam": "IPv4", 00:19:54.493 "traddr": "10.0.0.2", 00:19:54.493 "trsvcid": "4420" 00:19:54.493 }, 00:19:54.493 "peer_address": { 00:19:54.493 "trtype": "TCP", 00:19:54.493 "adrfam": "IPv4", 00:19:54.493 "traddr": "10.0.0.1", 00:19:54.493 "trsvcid": "54246" 00:19:54.493 }, 00:19:54.493 "auth": { 00:19:54.493 "state": "completed", 00:19:54.493 
"digest": "sha256", 00:19:54.493 "dhgroup": "ffdhe8192" 00:19:54.493 } 00:19:54.493 } 00:19:54.493 ]' 00:19:54.493 21:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.493 21:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.493 21:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.751 21:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:54.751 21:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.751 21:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.751 21:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.751 21:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.023 21:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:19:55.023 21:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:19:55.981 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.981 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.981 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.981 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.981 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.981 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:55.981 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.981 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.981 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:55.981 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:56.240 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:56.240 21:09:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.240 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:56.240 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:56.240 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:56.240 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.240 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.240 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.240 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.240 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.240 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.240 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.240 21:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.498 00:19:56.498 21:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.498 21:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.498 21:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.063 21:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.063 21:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.063 21:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.063 21:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.063 21:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.063 21:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.063 { 00:19:57.063 "cntlid": 49, 00:19:57.063 "qid": 0, 00:19:57.063 "state": "enabled", 00:19:57.063 "thread": "nvmf_tgt_poll_group_000", 00:19:57.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:57.063 "listen_address": { 00:19:57.063 "trtype": "TCP", 00:19:57.063 "adrfam": "IPv4", 
00:19:57.063 "traddr": "10.0.0.2", 00:19:57.063 "trsvcid": "4420" 00:19:57.063 }, 00:19:57.063 "peer_address": { 00:19:57.063 "trtype": "TCP", 00:19:57.063 "adrfam": "IPv4", 00:19:57.063 "traddr": "10.0.0.1", 00:19:57.063 "trsvcid": "39100" 00:19:57.063 }, 00:19:57.063 "auth": { 00:19:57.063 "state": "completed", 00:19:57.063 "digest": "sha384", 00:19:57.063 "dhgroup": "null" 00:19:57.063 } 00:19:57.063 } 00:19:57.063 ]' 00:19:57.063 21:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.063 21:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.063 21:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.063 21:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:57.063 21:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.063 21:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.063 21:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.063 21:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.321 21:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:19:57.321 21:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:19:58.262 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.262 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.262 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.262 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.262 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.262 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.262 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:58.262 21:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:58.520 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:58.521 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.521 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:58.521 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:58.521 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:58.521 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.521 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.521 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.521 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.521 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.521 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.521 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.521 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.778 00:19:58.778 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.778 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.778 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.036 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.036 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.036 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.036 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.036 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.036 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.036 { 00:19:59.036 "cntlid": 51, 00:19:59.036 "qid": 0, 00:19:59.036 "state": "enabled", 
00:19:59.036 "thread": "nvmf_tgt_poll_group_000", 00:19:59.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:59.036 "listen_address": { 00:19:59.036 "trtype": "TCP", 00:19:59.036 "adrfam": "IPv4", 00:19:59.036 "traddr": "10.0.0.2", 00:19:59.036 "trsvcid": "4420" 00:19:59.036 }, 00:19:59.036 "peer_address": { 00:19:59.036 "trtype": "TCP", 00:19:59.036 "adrfam": "IPv4", 00:19:59.036 "traddr": "10.0.0.1", 00:19:59.036 "trsvcid": "39126" 00:19:59.036 }, 00:19:59.036 "auth": { 00:19:59.036 "state": "completed", 00:19:59.036 "digest": "sha384", 00:19:59.036 "dhgroup": "null" 00:19:59.036 } 00:19:59.036 } 00:19:59.036 ]' 00:19:59.036 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.294 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:59.294 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.294 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:59.294 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.294 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.294 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.294 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.552 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:19:59.552 21:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:20:00.486 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.486 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.486 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.486 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.486 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.486 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.486 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:00.486 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:00.744 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:00.744 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.744 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:00.744 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:00.744 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:00.744 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.744 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.744 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.744 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.744 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.744 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.744 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.744 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.310 00:20:01.310 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.310 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.310 21:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.568 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.568 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.568 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.568 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.568 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.568 21:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.568 { 00:20:01.568 "cntlid": 53, 00:20:01.568 "qid": 0, 00:20:01.568 "state": "enabled", 00:20:01.568 "thread": "nvmf_tgt_poll_group_000", 00:20:01.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:01.568 "listen_address": { 00:20:01.568 "trtype": "TCP", 00:20:01.568 "adrfam": "IPv4", 00:20:01.568 "traddr": "10.0.0.2", 00:20:01.568 "trsvcid": "4420" 00:20:01.568 }, 00:20:01.568 "peer_address": { 00:20:01.568 "trtype": "TCP", 00:20:01.568 "adrfam": "IPv4", 00:20:01.568 "traddr": "10.0.0.1", 00:20:01.568 "trsvcid": "39142" 00:20:01.568 }, 00:20:01.568 "auth": { 00:20:01.568 "state": "completed", 00:20:01.568 "digest": "sha384", 00:20:01.568 "dhgroup": "null" 00:20:01.568 } 00:20:01.568 } 00:20:01.568 ]' 00:20:01.568 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.568 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.568 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.568 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:01.568 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.568 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.568 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.568 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.826 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:20:01.827 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:20:02.761 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.019 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.019 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.019 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.019 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.019 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:03.019 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:03.019 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:03.277 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:03.277 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.277 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:03.277 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:03.277 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:03.277 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.277 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:03.277 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.277 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.277 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.277 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:03.277 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.277 21:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.535 00:20:03.535 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.535 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.535 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.793 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.793 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.793 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.793 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.793 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.793 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.793 { 00:20:03.793 "cntlid": 55, 00:20:03.793 "qid": 0, 00:20:03.793 "state": "enabled", 00:20:03.793 "thread": "nvmf_tgt_poll_group_000", 00:20:03.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:03.793 "listen_address": { 00:20:03.793 "trtype": "TCP", 00:20:03.793 "adrfam": "IPv4", 00:20:03.793 "traddr": "10.0.0.2", 00:20:03.793 "trsvcid": "4420" 00:20:03.793 }, 00:20:03.793 "peer_address": { 00:20:03.793 "trtype": "TCP", 00:20:03.793 "adrfam": "IPv4", 00:20:03.793 "traddr": "10.0.0.1", 00:20:03.793 "trsvcid": "39168" 00:20:03.793 }, 00:20:03.793 "auth": { 00:20:03.793 "state": "completed", 00:20:03.793 "digest": "sha384", 00:20:03.793 "dhgroup": "null" 00:20:03.793 } 00:20:03.793 } 00:20:03.793 ]' 00:20:03.793 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.793 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.793 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.793 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:03.793 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.051 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.051 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.051 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.309 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:20:04.309 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:20:05.243 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.243 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.243 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.243 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.243 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.243 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.243 21:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.243 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:05.243 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:05.502 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:05.502 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.502 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:05.502 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:05.502 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:05.502 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.502 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.502 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.502 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.502 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.502 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.502 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.502 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.760 00:20:06.018 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.018 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.018 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.276 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.276 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.276 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:06.276 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.276 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.276 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.276 { 00:20:06.276 "cntlid": 57, 00:20:06.276 "qid": 0, 00:20:06.276 "state": "enabled", 00:20:06.276 "thread": "nvmf_tgt_poll_group_000", 00:20:06.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:06.276 "listen_address": { 00:20:06.276 "trtype": "TCP", 00:20:06.276 "adrfam": "IPv4", 00:20:06.276 "traddr": "10.0.0.2", 00:20:06.276 "trsvcid": "4420" 00:20:06.276 }, 00:20:06.276 "peer_address": { 00:20:06.276 "trtype": "TCP", 00:20:06.276 "adrfam": "IPv4", 00:20:06.276 "traddr": "10.0.0.1", 00:20:06.276 "trsvcid": "34014" 00:20:06.276 }, 00:20:06.276 "auth": { 00:20:06.276 "state": "completed", 00:20:06.276 "digest": "sha384", 00:20:06.276 "dhgroup": "ffdhe2048" 00:20:06.276 } 00:20:06.276 } 00:20:06.276 ]' 00:20:06.276 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.276 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.276 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.276 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:06.276 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.276 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.276 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.276 21:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.535 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:20:06.535 21:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:20:07.468 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.468 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.468 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.468 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.726 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.726 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.726 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.726 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.984 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:07.984 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.984 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:07.984 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:07.984 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:07.984 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.985 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.985 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.985 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.985 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.985 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.985 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.985 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.242 00:20:08.242 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.242 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.242 21:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.501 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.501 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.501 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.501 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.501 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.501 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.501 { 00:20:08.501 "cntlid": 59, 00:20:08.501 "qid": 0, 00:20:08.501 "state": "enabled", 00:20:08.501 "thread": "nvmf_tgt_poll_group_000", 00:20:08.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:08.501 "listen_address": { 00:20:08.501 "trtype": "TCP", 00:20:08.501 "adrfam": "IPv4", 00:20:08.501 "traddr": "10.0.0.2", 00:20:08.501 "trsvcid": "4420" 00:20:08.501 }, 00:20:08.501 "peer_address": { 00:20:08.501 "trtype": "TCP", 00:20:08.501 "adrfam": "IPv4", 00:20:08.501 "traddr": "10.0.0.1", 00:20:08.501 "trsvcid": "34038" 00:20:08.501 }, 00:20:08.501 "auth": { 00:20:08.501 "state": "completed", 00:20:08.501 "digest": "sha384", 00:20:08.501 "dhgroup": "ffdhe2048" 00:20:08.501 } 00:20:08.501 } 00:20:08.501 ]' 00:20:08.501 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.501 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.501 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.759 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:08.759 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.759 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.759 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.759 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.017 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:20:09.017 21:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:20:09.950 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.950 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.950 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.950 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.950 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.950 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.950 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:09.950 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:10.208 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:10.208 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.208 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:10.208 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:10.208 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:10.208 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.208 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.208 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.208 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.208 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.208 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.208 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.208 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.774 00:20:10.774 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.774 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:10.774 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.774 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.774 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.774 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.774 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.774 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.774 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.774 { 00:20:10.774 "cntlid": 61, 00:20:10.774 "qid": 0, 00:20:10.774 "state": "enabled", 00:20:10.774 "thread": "nvmf_tgt_poll_group_000", 00:20:10.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:10.774 "listen_address": { 00:20:10.774 "trtype": "TCP", 00:20:10.774 "adrfam": "IPv4", 00:20:10.774 "traddr": "10.0.0.2", 00:20:10.774 "trsvcid": "4420" 00:20:10.774 }, 00:20:10.774 "peer_address": { 00:20:10.774 "trtype": "TCP", 00:20:10.774 "adrfam": "IPv4", 00:20:10.774 "traddr": "10.0.0.1", 00:20:10.774 "trsvcid": "34050" 00:20:10.774 }, 00:20:10.774 "auth": { 00:20:10.774 "state": "completed", 00:20:10.774 "digest": "sha384", 00:20:10.774 "dhgroup": "ffdhe2048" 00:20:10.774 } 00:20:10.774 } 00:20:10.774 ]' 00:20:10.774 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.032 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.032 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.032 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:11.032 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.032 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.032 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.032 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.290 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:20:11.290 21:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:20:12.224 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.224 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.224 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.224 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.224 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.224 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.224 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:12.224 21:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:12.483 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:12.483 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.483 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:12.483 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:12.483 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:12.483 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.483 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:12.483 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.483 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.741 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.741 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:12.741 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.741 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.999 00:20:12.999 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.999 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.999 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.258 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.258 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.258 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.258 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.258 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.258 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.258 { 00:20:13.258 "cntlid": 63, 00:20:13.258 "qid": 0, 00:20:13.258 "state": "enabled", 00:20:13.258 "thread": "nvmf_tgt_poll_group_000", 00:20:13.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:13.258 "listen_address": { 00:20:13.258 "trtype": "TCP", 00:20:13.258 "adrfam": "IPv4", 00:20:13.258 "traddr": "10.0.0.2", 00:20:13.258 "trsvcid": "4420" 00:20:13.258 }, 00:20:13.258 "peer_address": { 00:20:13.258 "trtype": "TCP", 00:20:13.258 "adrfam": "IPv4", 00:20:13.258 "traddr": "10.0.0.1", 00:20:13.258 "trsvcid": "34080" 00:20:13.258 }, 00:20:13.258 "auth": { 00:20:13.258 "state": "completed", 00:20:13.258 "digest": "sha384", 00:20:13.258 "dhgroup": "ffdhe2048" 00:20:13.258 } 00:20:13.258 } 00:20:13.258 ]' 00:20:13.258 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.258 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.258 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.258 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:13.258 21:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.258 21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.258 21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.258 21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.824 21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:20:13.824 21:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:20:14.759 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:14.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.759 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.759 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.759 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.759 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.759 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.759 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.759 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.759 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.759 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:14.759 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.759 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.759 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:14.759 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:14.760 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.760 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.760 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.760 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.018 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.018 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.018 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.018 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.276 
00:20:15.276 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.276 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.276 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.534 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.534 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.534 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.534 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.534 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.534 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.534 { 00:20:15.534 "cntlid": 65, 00:20:15.534 "qid": 0, 00:20:15.534 "state": "enabled", 00:20:15.534 "thread": "nvmf_tgt_poll_group_000", 00:20:15.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:15.534 "listen_address": { 00:20:15.534 "trtype": "TCP", 00:20:15.534 "adrfam": "IPv4", 00:20:15.534 "traddr": "10.0.0.2", 00:20:15.534 "trsvcid": "4420" 00:20:15.534 }, 00:20:15.534 "peer_address": { 00:20:15.534 "trtype": "TCP", 00:20:15.534 "adrfam": "IPv4", 00:20:15.534 "traddr": "10.0.0.1", 00:20:15.534 "trsvcid": "34104" 00:20:15.534 }, 00:20:15.534 "auth": { 00:20:15.534 "state": "completed", 00:20:15.534 "digest": "sha384", 00:20:15.534 "dhgroup": "ffdhe3072" 00:20:15.534 } 00:20:15.534 } 00:20:15.534 ]' 00:20:15.534 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.534 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.534 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.534 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:15.534 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.793 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.793 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.793 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.050 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:20:16.051 21:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:20:16.985 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.985 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.985 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.985 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.985 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.985 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.985 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:16.985 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:17.243 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:17.243 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.243 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:17.243 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:17.243 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:17.243 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.243 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.243 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.243 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.243 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.243 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.243 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.243 21:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.502 00:20:17.502 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.502 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.502 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.761 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.761 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.761 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.761 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.761 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.761 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.761 { 00:20:17.761 "cntlid": 67, 00:20:17.761 "qid": 0, 00:20:17.761 "state": "enabled", 00:20:17.761 "thread": "nvmf_tgt_poll_group_000", 00:20:17.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:17.761 "listen_address": { 00:20:17.761 "trtype": "TCP", 00:20:17.761 "adrfam": "IPv4", 00:20:17.761 "traddr": "10.0.0.2", 00:20:17.761 "trsvcid": "4420" 00:20:17.761 }, 00:20:17.761 "peer_address": { 00:20:17.761 "trtype": "TCP", 00:20:17.761 "adrfam": "IPv4", 00:20:17.761 "traddr": "10.0.0.1", 00:20:17.761 "trsvcid": "33862" 00:20:17.761 }, 00:20:17.761 "auth": { 00:20:17.761 "state": "completed", 00:20:17.761 "digest": "sha384", 00:20:17.761 "dhgroup": "ffdhe3072" 00:20:17.761 } 00:20:17.761 } 00:20:17.761 ]' 00:20:17.761 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.020 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.020 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.020 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:18.020 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.020 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.020 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.020 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.277 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret 
DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:20:18.277 21:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:20:19.209 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.209 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.209 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.209 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.209 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.209 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.209 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:19.209 21:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:19.467 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:19.467 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.467 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.467 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:19.467 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:19.467 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.467 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.467 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.467 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.467 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.467 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.467 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.467 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.032 00:20:20.032 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.032 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.032 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.290 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.290 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.290 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.290 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.290 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.290 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.290 { 00:20:20.290 "cntlid": 69, 00:20:20.290 "qid": 0, 00:20:20.290 "state": "enabled", 00:20:20.290 "thread": "nvmf_tgt_poll_group_000", 00:20:20.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:20.290 "listen_address": { 00:20:20.290 "trtype": "TCP", 00:20:20.290 "adrfam": "IPv4", 00:20:20.291 "traddr": "10.0.0.2", 00:20:20.291 "trsvcid": "4420" 00:20:20.291 }, 00:20:20.291 "peer_address": { 00:20:20.291 "trtype": "TCP", 00:20:20.291 "adrfam": "IPv4", 00:20:20.291 "traddr": "10.0.0.1", 00:20:20.291 "trsvcid": "33882" 00:20:20.291 }, 00:20:20.291 "auth": { 00:20:20.291 "state": "completed", 00:20:20.291 "digest": "sha384", 00:20:20.291 "dhgroup": "ffdhe3072" 00:20:20.291 } 00:20:20.291 } 00:20:20.291 ]' 00:20:20.291 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.291 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.291 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.291 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:20.291 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.291 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.291 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.291 21:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:20.549 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:20:20.549 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:20:21.482 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.482 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.482 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.482 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.482 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.482 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.482 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:21.482 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:22.048 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:22.048 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.048 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:22.048 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:22.048 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:22.048 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.048 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:22.048 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.048 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.048 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.048 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
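
Note that the key3 iteration just above registers the host with --dhchap-key key3 only (there is no ckey3), so for that key the controller does not authenticate back to the host. After every attach, the script checks the outcome before detaching; below is a condensed sketch of the @73-@78 verification steps as they appear in this trace (rpc.py path, socket and jq filters are the ones logged; the expected digest/dhgroup values are whatever the current iteration selected, ffdhe3072 at this point):

#!/usr/bin/env bash
set -e  # make the [[ ... ]] checks act as assertions, as the harness effectively does
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# @73: the attached controller must show up on the host side under the expected name.
name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# @74-@77: the target-side qpair must report a completed DH-HMAC-CHAP handshake
# with the digest/dhgroup pair selected for this iteration.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# @78: tear the controller down before the nvme-cli reconnect and the next key.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
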
00:20:22.048 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:22.048 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:22.305 00:20:22.305 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.305 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.306 21:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.564 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.564 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.564 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.564 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.564 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.564 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.564 { 00:20:22.564 "cntlid": 71, 00:20:22.564 "qid": 0, 00:20:22.564 "state": "enabled", 00:20:22.564 "thread": "nvmf_tgt_poll_group_000", 00:20:22.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:22.564 "listen_address": { 00:20:22.564 "trtype": "TCP", 00:20:22.564 "adrfam": "IPv4", 00:20:22.564 "traddr": "10.0.0.2", 00:20:22.564 "trsvcid": "4420" 00:20:22.564 }, 00:20:22.564 "peer_address": { 00:20:22.564 "trtype": "TCP", 00:20:22.564 "adrfam": "IPv4", 00:20:22.564 "traddr": "10.0.0.1", 00:20:22.564 "trsvcid": "33906" 00:20:22.564 }, 00:20:22.564 "auth": { 00:20:22.564 "state": "completed", 00:20:22.564 "digest": "sha384", 00:20:22.564 "dhgroup": "ffdhe3072" 00:20:22.564 } 00:20:22.564 } 00:20:22.564 ]' 00:20:22.564 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.564 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.564 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.564 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:22.564 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.564 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.564 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.564 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.129 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:20:23.129 21:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:20:24.061 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.061 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.061 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.061 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.061 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.061 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.061 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.061 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:24.061 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:24.319 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:24.319 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.319 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:24.319 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:24.319 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:24.319 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.319 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.319 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.319 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.319 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
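
The block above is where the outer loop advances to the next DH group (ffdhe4096) and begins the same four-key cycle again. The nesting is visible in the @119/@120/@121/@123 trace lines; the following sketch reproduces that control flow under the assumption that the arrays hold only what this excerpt shows (other digests, groups or keys may be exercised elsewhere in the log), with the connect_authenticate helper from target/auth.sh reduced to a placeholder:

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
digest=sha384                              # only sha384 appears in this excerpt
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)   # DH groups seen in this excerpt
keys=(key0 key1 key2 key3)                 # keyids 0-3 seen here; key3 carries no ckey3

for dhgroup in "${dhgroups[@]}"; do        # target/auth.sh@119
    for keyid in "${!keys[@]}"; do         # target/auth.sh@120
        # @121: pin the host to a single digest/dhgroup pair before connecting.
        "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # @123: connect_authenticate "$digest" "$dhgroup" "$keyid" -- the
        # add_host / attach / verify / detach / remove_host cycle traced above.
        echo "connect_authenticate $digest $dhgroup $keyid"
    done
done

Each connect_authenticate expansion then produces the same block of add_host, attach_controller, qpair checks, detach, nvme-cli reconnect and remove_host lines already traced in full for ffdhe3072.
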
00:20:24.319 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.319 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.319 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.595 00:20:24.595 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.595 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.595 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.904 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.904 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.904 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.904 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.904 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.904 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.904 { 00:20:24.904 "cntlid": 73, 00:20:24.904 "qid": 0, 00:20:24.904 "state": "enabled", 00:20:24.904 "thread": "nvmf_tgt_poll_group_000", 00:20:24.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:24.904 "listen_address": { 00:20:24.904 "trtype": "TCP", 00:20:24.904 "adrfam": "IPv4", 00:20:24.904 "traddr": "10.0.0.2", 00:20:24.904 "trsvcid": "4420" 00:20:24.904 }, 00:20:24.904 "peer_address": { 00:20:24.904 "trtype": "TCP", 00:20:24.904 "adrfam": "IPv4", 00:20:24.904 "traddr": "10.0.0.1", 00:20:24.904 "trsvcid": "33934" 00:20:24.904 }, 00:20:24.904 "auth": { 00:20:24.904 "state": "completed", 00:20:24.904 "digest": "sha384", 00:20:24.904 "dhgroup": "ffdhe4096" 00:20:24.904 } 00:20:24.904 } 00:20:24.904 ]' 00:20:24.904 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.904 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.904 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.904 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:25.185 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.185 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.185 
21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.185 21:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.443 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:20:25.443 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:20:26.375 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.375 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.375 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.375 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.375 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.375 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.375 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:26.375 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:26.632 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:26.633 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.633 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:26.633 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:26.633 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:26.633 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.633 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.633 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.633 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.633 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.633 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.633 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.633 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.198 00:20:27.199 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.199 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.199 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.199 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.199 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.199 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.199 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.199 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.199 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.199 { 00:20:27.199 "cntlid": 75, 00:20:27.199 "qid": 0, 00:20:27.199 "state": "enabled", 00:20:27.199 "thread": "nvmf_tgt_poll_group_000", 00:20:27.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:27.199 "listen_address": { 00:20:27.199 "trtype": "TCP", 00:20:27.199 "adrfam": "IPv4", 00:20:27.199 "traddr": "10.0.0.2", 00:20:27.199 "trsvcid": "4420" 00:20:27.199 }, 00:20:27.199 "peer_address": { 00:20:27.199 "trtype": "TCP", 00:20:27.199 "adrfam": "IPv4", 00:20:27.199 "traddr": "10.0.0.1", 00:20:27.199 "trsvcid": "49732" 00:20:27.199 }, 00:20:27.199 "auth": { 00:20:27.199 "state": "completed", 00:20:27.199 "digest": "sha384", 00:20:27.199 "dhgroup": "ffdhe4096" 00:20:27.199 } 00:20:27.199 } 00:20:27.199 ]' 00:20:27.199 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.456 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.456 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.456 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:27.456 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.456 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.456 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.457 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.714 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:20:27.714 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:20:28.647 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.647 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.647 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.647 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.647 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.647 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.647 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:28.647 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:28.905 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:28.905 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.905 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.905 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:28.905 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:28.905 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.905 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.905 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.905 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.905 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.905 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.905 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.905 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.470 00:20:29.470 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.470 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.471 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.728 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.728 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.728 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.728 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.728 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.728 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.728 { 00:20:29.728 "cntlid": 77, 00:20:29.728 "qid": 0, 00:20:29.728 "state": "enabled", 00:20:29.728 "thread": "nvmf_tgt_poll_group_000", 00:20:29.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:29.728 "listen_address": { 00:20:29.728 "trtype": "TCP", 00:20:29.728 "adrfam": "IPv4", 00:20:29.728 "traddr": "10.0.0.2", 00:20:29.728 "trsvcid": "4420" 00:20:29.728 }, 00:20:29.728 "peer_address": { 00:20:29.728 "trtype": "TCP", 00:20:29.728 "adrfam": "IPv4", 00:20:29.728 "traddr": "10.0.0.1", 00:20:29.728 "trsvcid": "49748" 00:20:29.728 }, 00:20:29.728 "auth": { 00:20:29.728 "state": "completed", 00:20:29.728 "digest": "sha384", 00:20:29.728 "dhgroup": "ffdhe4096" 00:20:29.728 } 00:20:29.728 } 00:20:29.728 ]' 00:20:29.728 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.728 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.728 21:10:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.728 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:29.728 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.728 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.728 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.729 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.987 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:20:29.987 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:20:30.920 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.920 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.920 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.920 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.920 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.920 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.920 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.920 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:31.486 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:31.486 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.486 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.486 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:31.486 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:31.486 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.486 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:31.486 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.486 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.486 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.486 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:31.486 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.486 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.743 00:20:31.743 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.743 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.744 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.002 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.002 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.002 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.002 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.002 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.002 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.002 { 00:20:32.002 "cntlid": 79, 00:20:32.002 "qid": 0, 00:20:32.002 "state": "enabled", 00:20:32.002 "thread": "nvmf_tgt_poll_group_000", 00:20:32.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:32.002 "listen_address": { 00:20:32.002 "trtype": "TCP", 00:20:32.002 "adrfam": "IPv4", 00:20:32.002 "traddr": "10.0.0.2", 00:20:32.002 "trsvcid": "4420" 00:20:32.002 }, 00:20:32.002 "peer_address": { 00:20:32.002 "trtype": "TCP", 00:20:32.002 "adrfam": "IPv4", 00:20:32.002 "traddr": "10.0.0.1", 00:20:32.002 "trsvcid": "49772" 00:20:32.002 }, 00:20:32.002 "auth": { 00:20:32.002 "state": "completed", 00:20:32.002 "digest": "sha384", 00:20:32.002 "dhgroup": "ffdhe4096" 00:20:32.002 } 00:20:32.002 } 00:20:32.002 ]' 00:20:32.002 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.002 21:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.002 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.002 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:32.002 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.002 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.002 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.002 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.569 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:20:32.569 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:20:33.502 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.502 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.502 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.502 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.502 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.502 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.502 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.502 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.502 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.759 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:33.759 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.759 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.759 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:33.759 21:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:33.759 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.759 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.759 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.759 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.759 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.759 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.759 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.759 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.325 00:20:34.325 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.325 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.325 21:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.583 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.583 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.583 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.583 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.583 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.583 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.583 { 00:20:34.583 "cntlid": 81, 00:20:34.583 "qid": 0, 00:20:34.583 "state": "enabled", 00:20:34.583 "thread": "nvmf_tgt_poll_group_000", 00:20:34.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:34.583 "listen_address": { 00:20:34.583 "trtype": "TCP", 00:20:34.583 "adrfam": "IPv4", 00:20:34.583 "traddr": "10.0.0.2", 00:20:34.583 "trsvcid": "4420" 00:20:34.583 }, 00:20:34.583 "peer_address": { 00:20:34.583 "trtype": "TCP", 00:20:34.583 "adrfam": "IPv4", 00:20:34.583 "traddr": "10.0.0.1", 00:20:34.583 "trsvcid": "49810" 00:20:34.583 }, 00:20:34.583 "auth": { 00:20:34.583 "state": "completed", 00:20:34.583 "digest": 
"sha384", 00:20:34.583 "dhgroup": "ffdhe6144" 00:20:34.583 } 00:20:34.583 } 00:20:34.583 ]' 00:20:34.583 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.583 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.583 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.583 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:34.583 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.583 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.583 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.583 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.840 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:20:34.840 21:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:20:35.772 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.772 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.772 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.772 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.772 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.772 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.772 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:35.772 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:36.339 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:36.339 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.339 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.339 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:36.339 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:36.339 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.339 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.339 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.339 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.339 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.339 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.339 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.339 21:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.905 00:20:36.905 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.905 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.905 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.162 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.162 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.162 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.162 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.162 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.162 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.162 { 00:20:37.162 "cntlid": 83, 00:20:37.162 "qid": 0, 00:20:37.162 "state": "enabled", 00:20:37.162 "thread": "nvmf_tgt_poll_group_000", 00:20:37.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:37.163 "listen_address": { 00:20:37.163 "trtype": "TCP", 00:20:37.163 "adrfam": "IPv4", 00:20:37.163 "traddr": "10.0.0.2", 00:20:37.163 
"trsvcid": "4420" 00:20:37.163 }, 00:20:37.163 "peer_address": { 00:20:37.163 "trtype": "TCP", 00:20:37.163 "adrfam": "IPv4", 00:20:37.163 "traddr": "10.0.0.1", 00:20:37.163 "trsvcid": "45684" 00:20:37.163 }, 00:20:37.163 "auth": { 00:20:37.163 "state": "completed", 00:20:37.163 "digest": "sha384", 00:20:37.163 "dhgroup": "ffdhe6144" 00:20:37.163 } 00:20:37.163 } 00:20:37.163 ]' 00:20:37.163 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.163 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.163 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.163 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:37.163 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.163 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.163 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.163 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.421 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:20:37.421 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:20:38.356 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.356 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.356 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.356 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.356 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.356 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.356 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.356 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.614 
21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:38.614 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.614 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.614 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:38.614 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:38.614 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.614 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.614 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.614 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.614 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.614 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.614 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.614 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.180 00:20:39.180 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.180 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.180 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.438 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.438 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.438 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.438 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.438 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.438 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.438 { 00:20:39.438 "cntlid": 85, 00:20:39.438 "qid": 0, 00:20:39.438 "state": "enabled", 00:20:39.438 "thread": "nvmf_tgt_poll_group_000", 00:20:39.438 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:39.438 "listen_address": { 00:20:39.438 "trtype": "TCP", 00:20:39.438 "adrfam": "IPv4", 00:20:39.438 "traddr": "10.0.0.2", 00:20:39.438 "trsvcid": "4420" 00:20:39.438 }, 00:20:39.438 "peer_address": { 00:20:39.438 "trtype": "TCP", 00:20:39.438 "adrfam": "IPv4", 00:20:39.438 "traddr": "10.0.0.1", 00:20:39.438 "trsvcid": "45710" 00:20:39.438 }, 00:20:39.438 "auth": { 00:20:39.438 "state": "completed", 00:20:39.438 "digest": "sha384", 00:20:39.438 "dhgroup": "ffdhe6144" 00:20:39.438 } 00:20:39.438 } 00:20:39.438 ]' 00:20:39.438 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.438 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.438 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.697 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:39.697 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.697 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.697 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.697 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.954 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:20:39.954 21:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:20:40.887 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.887 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.887 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.887 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.887 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.887 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.887 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:40.887 21:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.145 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:41.145 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.145 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.145 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:41.145 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:41.145 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.145 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:41.145 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.145 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.145 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.145 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:41.145 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.145 21:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.711 00:20:41.711 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.711 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.711 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.969 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.969 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.969 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.969 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.969 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.969 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.969 { 00:20:41.969 "cntlid": 87, 
00:20:41.969 "qid": 0, 00:20:41.969 "state": "enabled", 00:20:41.969 "thread": "nvmf_tgt_poll_group_000", 00:20:41.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:41.969 "listen_address": { 00:20:41.969 "trtype": "TCP", 00:20:41.969 "adrfam": "IPv4", 00:20:41.969 "traddr": "10.0.0.2", 00:20:41.969 "trsvcid": "4420" 00:20:41.969 }, 00:20:41.969 "peer_address": { 00:20:41.969 "trtype": "TCP", 00:20:41.969 "adrfam": "IPv4", 00:20:41.969 "traddr": "10.0.0.1", 00:20:41.969 "trsvcid": "45724" 00:20:41.969 }, 00:20:41.969 "auth": { 00:20:41.969 "state": "completed", 00:20:41.969 "digest": "sha384", 00:20:41.969 "dhgroup": "ffdhe6144" 00:20:41.969 } 00:20:41.969 } 00:20:41.969 ]' 00:20:41.969 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.969 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.969 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.227 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.227 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.227 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.227 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.227 21:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.486 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:20:42.486 21:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:20:43.419 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.419 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.419 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.419 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.419 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.419 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.419 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.419 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:43.419 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:43.677 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:43.677 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.677 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.677 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:43.677 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:43.677 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.677 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.677 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.677 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.677 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.677 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.677 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.677 21:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.611 00:20:44.611 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.611 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.611 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.869 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.869 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.869 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.869 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.869 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.869 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.869 { 00:20:44.869 "cntlid": 89, 00:20:44.869 "qid": 0, 00:20:44.869 "state": "enabled", 00:20:44.869 "thread": "nvmf_tgt_poll_group_000", 00:20:44.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:44.869 "listen_address": { 00:20:44.869 "trtype": "TCP", 00:20:44.869 "adrfam": "IPv4", 00:20:44.869 "traddr": "10.0.0.2", 00:20:44.869 "trsvcid": "4420" 00:20:44.869 }, 00:20:44.869 "peer_address": { 00:20:44.869 "trtype": "TCP", 00:20:44.869 "adrfam": "IPv4", 00:20:44.869 "traddr": "10.0.0.1", 00:20:44.869 "trsvcid": "45764" 00:20:44.869 }, 00:20:44.869 "auth": { 00:20:44.869 "state": "completed", 00:20:44.869 "digest": "sha384", 00:20:44.869 "dhgroup": "ffdhe8192" 00:20:44.869 } 00:20:44.869 } 00:20:44.869 ]' 00:20:44.869 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.869 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.869 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.127 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:45.127 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.127 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.127 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.127 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.385 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:20:45.385 21:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:20:46.318 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.318 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.318 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.318 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.318 21:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.318 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.318 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:46.318 21:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:46.576 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:46.576 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.576 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.576 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:46.576 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:46.576 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.576 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.576 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.576 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.576 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.576 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.576 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.576 21:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.510 00:20:47.510 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.510 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.510 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.769 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.769 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:47.769 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.769 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.769 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.769 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.769 { 00:20:47.769 "cntlid": 91, 00:20:47.769 "qid": 0, 00:20:47.769 "state": "enabled", 00:20:47.769 "thread": "nvmf_tgt_poll_group_000", 00:20:47.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:47.769 "listen_address": { 00:20:47.769 "trtype": "TCP", 00:20:47.769 "adrfam": "IPv4", 00:20:47.769 "traddr": "10.0.0.2", 00:20:47.769 "trsvcid": "4420" 00:20:47.769 }, 00:20:47.769 "peer_address": { 00:20:47.769 "trtype": "TCP", 00:20:47.769 "adrfam": "IPv4", 00:20:47.769 "traddr": "10.0.0.1", 00:20:47.769 "trsvcid": "45498" 00:20:47.769 }, 00:20:47.769 "auth": { 00:20:47.769 "state": "completed", 00:20:47.769 "digest": "sha384", 00:20:47.769 "dhgroup": "ffdhe8192" 00:20:47.769 } 00:20:47.769 } 00:20:47.769 ]' 00:20:47.769 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.769 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.769 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.769 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:47.769 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.769 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.769 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.769 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.335 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:20:48.335 21:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:20:49.269 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.269 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.269 21:10:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.269 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.269 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.269 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.269 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:49.269 21:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:49.527 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:49.527 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.527 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.527 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:49.527 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:49.527 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.527 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.527 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.527 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.527 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.527 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.527 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.527 21:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.461 00:20:50.461 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.461 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.461 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.719 21:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.720 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.720 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.720 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.720 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.720 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.720 { 00:20:50.720 "cntlid": 93, 00:20:50.720 "qid": 0, 00:20:50.720 "state": "enabled", 00:20:50.720 "thread": "nvmf_tgt_poll_group_000", 00:20:50.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:50.720 "listen_address": { 00:20:50.720 "trtype": "TCP", 00:20:50.720 "adrfam": "IPv4", 00:20:50.720 "traddr": "10.0.0.2", 00:20:50.720 "trsvcid": "4420" 00:20:50.720 }, 00:20:50.720 "peer_address": { 00:20:50.720 "trtype": "TCP", 00:20:50.720 "adrfam": "IPv4", 00:20:50.720 "traddr": "10.0.0.1", 00:20:50.720 "trsvcid": "45518" 00:20:50.720 }, 00:20:50.720 "auth": { 00:20:50.720 "state": "completed", 00:20:50.720 "digest": "sha384", 00:20:50.720 "dhgroup": "ffdhe8192" 00:20:50.720 } 00:20:50.720 } 00:20:50.720 ]' 00:20:50.720 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.720 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.720 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.720 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:50.720 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.720 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.720 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.720 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.978 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:20:50.978 21:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:20:51.911 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.911 21:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.911 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.911 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.912 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.912 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.912 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.912 21:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.478 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:52.478 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.478 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.478 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:52.478 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:52.478 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.478 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:52.478 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.478 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.478 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.478 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:52.478 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.478 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.413 00:20:53.413 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.413 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.413 21:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.671 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.671 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.671 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.671 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.671 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.671 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.671 { 00:20:53.671 "cntlid": 95, 00:20:53.671 "qid": 0, 00:20:53.671 "state": "enabled", 00:20:53.671 "thread": "nvmf_tgt_poll_group_000", 00:20:53.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:53.671 "listen_address": { 00:20:53.671 "trtype": "TCP", 00:20:53.671 "adrfam": "IPv4", 00:20:53.671 "traddr": "10.0.0.2", 00:20:53.671 "trsvcid": "4420" 00:20:53.671 }, 00:20:53.671 "peer_address": { 00:20:53.671 "trtype": "TCP", 00:20:53.671 "adrfam": "IPv4", 00:20:53.671 "traddr": "10.0.0.1", 00:20:53.671 "trsvcid": "45552" 00:20:53.671 }, 00:20:53.671 "auth": { 00:20:53.671 "state": "completed", 00:20:53.671 "digest": "sha384", 00:20:53.671 "dhgroup": "ffdhe8192" 00:20:53.671 } 00:20:53.671 } 00:20:53.671 ]' 00:20:53.671 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.671 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.671 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.671 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:53.671 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.671 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.671 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.671 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.928 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:20:53.928 21:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:20:54.861 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.861 21:10:28 
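(Likewise a minimal sketch, not captured output, of the kernel-initiator check that closes each iteration, as in the nvme connect/disconnect just above. Flags and addresses are verbatim from the log; the DHHC-1 secret is abbreviated here, its full value appears in the log entries above, and rpc_cmd again stands for the autotest wrapper around the target's rpc.py.)

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Connect with nvme-cli using the host secret for this key index; for
    # keys 0-2 the log also passes a matching --dhchap-ctrl-secret.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
        --dhchap-secret 'DHHC-1:03:NmY2...m3A=:'   # abbreviated, see log above

    # Tear down: drop the kernel connection and revoke the host on the target.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"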
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.861 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.861 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.861 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.861 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:54.861 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.861 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.861 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:54.861 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:55.149 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:55.149 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.149 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:55.149 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:55.149 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:55.149 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.149 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.149 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.149 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.149 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.149 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.149 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.149 21:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.432 00:20:55.432 
21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.432 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.432 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.689 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.689 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.689 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.689 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.689 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.689 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.689 { 00:20:55.689 "cntlid": 97, 00:20:55.689 "qid": 0, 00:20:55.689 "state": "enabled", 00:20:55.689 "thread": "nvmf_tgt_poll_group_000", 00:20:55.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:55.689 "listen_address": { 00:20:55.689 "trtype": "TCP", 00:20:55.689 "adrfam": "IPv4", 00:20:55.689 "traddr": "10.0.0.2", 00:20:55.689 "trsvcid": "4420" 00:20:55.689 }, 00:20:55.689 "peer_address": { 00:20:55.689 "trtype": "TCP", 00:20:55.689 "adrfam": "IPv4", 00:20:55.689 "traddr": "10.0.0.1", 00:20:55.689 "trsvcid": "45570" 00:20:55.689 }, 00:20:55.689 "auth": { 00:20:55.689 "state": "completed", 00:20:55.689 "digest": "sha512", 00:20:55.689 "dhgroup": "null" 00:20:55.689 } 00:20:55.689 } 00:20:55.689 ]' 00:20:55.689 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.946 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.946 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.946 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:55.946 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.946 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.946 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.946 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.203 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:20:56.203 21:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:20:57.137 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.137 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.137 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.137 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.137 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.137 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.137 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:57.137 21:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:57.395 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:57.395 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.395 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:57.395 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:57.395 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:57.395 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.395 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.395 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.395 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.395 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.395 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.395 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.395 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.962 00:20:57.962 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.962 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.962 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.962 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.962 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.962 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.962 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.221 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.221 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.221 { 00:20:58.221 "cntlid": 99, 00:20:58.221 "qid": 0, 00:20:58.221 "state": "enabled", 00:20:58.221 "thread": "nvmf_tgt_poll_group_000", 00:20:58.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:58.221 "listen_address": { 00:20:58.221 "trtype": "TCP", 00:20:58.221 "adrfam": "IPv4", 00:20:58.221 "traddr": "10.0.0.2", 00:20:58.221 "trsvcid": "4420" 00:20:58.221 }, 00:20:58.221 "peer_address": { 00:20:58.221 "trtype": "TCP", 00:20:58.221 "adrfam": "IPv4", 00:20:58.221 "traddr": "10.0.0.1", 00:20:58.221 "trsvcid": "36772" 00:20:58.221 }, 00:20:58.221 "auth": { 00:20:58.221 "state": "completed", 00:20:58.221 "digest": "sha512", 00:20:58.221 "dhgroup": "null" 00:20:58.221 } 00:20:58.221 } 00:20:58.221 ]' 00:20:58.221 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.221 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.221 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.221 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:58.221 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.221 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.221 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.221 21:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.479 21:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:20:58.480 21:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:20:59.414 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.414 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.414 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.414 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.414 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.414 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.414 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.414 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.672 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:59.672 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.672 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:59.672 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:59.672 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:59.672 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.672 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.672 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.672 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.672 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.672 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.672 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
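For orientation, the @118/@119/@120/@121/@123 trace lines above correspond to a nested loop in target/auth.sh. A minimal bash sketch of that structure follows; the array contents are inferred from the values exercised in this excerpt (sha512 as the digest, null/ffdhe2048/ffdhe3072 as the DH groups, keyid 0..3), and hostrpc/connect_authenticate are the script's own helpers traced below, not redefined here.

  # Sketch of the loop driving this trace (array contents inferred from the log;
  # the real definitions live earlier in target/auth.sh).
  digests=("sha512")                          # only sha512 appears in this excerpt
  dhgroups=("null" "ffdhe2048" "ffdhe3072")   # groups exercised in this excerpt
  keys=("key0" "key1" "key2" "key3")          # keyid 0..3 appear in this excerpt

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              # @121: restrict the host-side initiator to one digest/dhgroup pair
              hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              # @123: run the add_host/attach/verify/detach/remove cycle for this key
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done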
00:20:59.672 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.238 00:21:00.238 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.238 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.238 21:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.496 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.496 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.496 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.496 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.496 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.496 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.496 { 00:21:00.496 "cntlid": 101, 00:21:00.496 "qid": 0, 00:21:00.496 "state": "enabled", 00:21:00.496 "thread": "nvmf_tgt_poll_group_000", 00:21:00.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:00.496 "listen_address": { 00:21:00.496 "trtype": "TCP", 00:21:00.496 "adrfam": "IPv4", 00:21:00.496 "traddr": "10.0.0.2", 00:21:00.496 "trsvcid": "4420" 00:21:00.496 }, 00:21:00.496 "peer_address": { 00:21:00.496 "trtype": "TCP", 00:21:00.496 "adrfam": "IPv4", 00:21:00.496 "traddr": "10.0.0.1", 00:21:00.496 "trsvcid": "36790" 00:21:00.496 }, 00:21:00.496 "auth": { 00:21:00.496 "state": "completed", 00:21:00.496 "digest": "sha512", 00:21:00.496 "dhgroup": "null" 00:21:00.496 } 00:21:00.496 } 00:21:00.496 ]' 00:21:00.496 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.496 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.496 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.496 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:00.496 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.496 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.496 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.496 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.754 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:21:00.754 21:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:21:01.687 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.687 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.687 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.687 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.687 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.687 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.687 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:01.687 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:01.945 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:01.945 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.945 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:01.945 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:01.945 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:01.945 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.945 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:01.945 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.945 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.945 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.945 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:01.945 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:01.945 21:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:02.511 00:21:02.511 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.511 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.511 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.769 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.769 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.770 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.770 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.770 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.770 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.770 { 00:21:02.770 "cntlid": 103, 00:21:02.770 "qid": 0, 00:21:02.770 "state": "enabled", 00:21:02.770 "thread": "nvmf_tgt_poll_group_000", 00:21:02.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:02.770 "listen_address": { 00:21:02.770 "trtype": "TCP", 00:21:02.770 "adrfam": "IPv4", 00:21:02.770 "traddr": "10.0.0.2", 00:21:02.770 "trsvcid": "4420" 00:21:02.770 }, 00:21:02.770 "peer_address": { 00:21:02.770 "trtype": "TCP", 00:21:02.770 "adrfam": "IPv4", 00:21:02.770 "traddr": "10.0.0.1", 00:21:02.770 "trsvcid": "36822" 00:21:02.770 }, 00:21:02.770 "auth": { 00:21:02.770 "state": "completed", 00:21:02.770 "digest": "sha512", 00:21:02.770 "dhgroup": "null" 00:21:02.770 } 00:21:02.770 } 00:21:02.770 ]' 00:21:02.770 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.770 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.770 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.770 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:02.770 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.770 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.770 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.770 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.028 21:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:21:03.028 21:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:21:04.402 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.402 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.402 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.402 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.402 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.402 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.402 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.402 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.402 21:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.402 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:04.402 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.402 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:04.402 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:04.402 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:04.402 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.402 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.402 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.402 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.402 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.402 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
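Each connect_authenticate pass reduces to the RPC sequence below; this is a simplified sketch assembled from the rpc.py invocations visible in this trace. The host-side socket /var/tmp/host.sock, the NQNs, and the host UUID are copied from the commands above; rpc_cmd is the autotest wrapper for the target application, defined here only for the sketch under the assumption that the target listens on rpc.py's default socket (the target socket is not shown in this excerpt). The kernel nvme connect/disconnect leg that the trace runs before remove_host is sketched separately further down.

  # One connect_authenticate pass, assembled from the rpc.py calls in this trace.
  rpc_cmd() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }  # assumption: target on rpc.py's default socket
  HOST_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  SUBNQN="nqn.2024-03.io.spdk:cnode0"
  HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"

  # Target side: allow the host on the subsystem with the key pair under test.
  rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Host side: attach a controller over TCP, authenticating with the same keys.
  $HOST_RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Verify the attach and the qpair's auth result, then tear down.
  $HOST_RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
  rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN"              # auth.state should be "completed"
  $HOST_RPC bdev_nvme_detach_controller nvme0
  rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"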
00:21:04.402 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.402 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.661 00:21:04.918 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.918 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.918 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.176 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.176 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.176 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.176 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.176 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.176 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.176 { 00:21:05.176 "cntlid": 105, 00:21:05.176 "qid": 0, 00:21:05.176 "state": "enabled", 00:21:05.176 "thread": "nvmf_tgt_poll_group_000", 00:21:05.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:05.176 "listen_address": { 00:21:05.176 "trtype": "TCP", 00:21:05.176 "adrfam": "IPv4", 00:21:05.176 "traddr": "10.0.0.2", 00:21:05.176 "trsvcid": "4420" 00:21:05.176 }, 00:21:05.176 "peer_address": { 00:21:05.176 "trtype": "TCP", 00:21:05.176 "adrfam": "IPv4", 00:21:05.176 "traddr": "10.0.0.1", 00:21:05.176 "trsvcid": "36848" 00:21:05.176 }, 00:21:05.176 "auth": { 00:21:05.176 "state": "completed", 00:21:05.176 "digest": "sha512", 00:21:05.176 "dhgroup": "ffdhe2048" 00:21:05.176 } 00:21:05.176 } 00:21:05.176 ]' 00:21:05.176 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.176 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.176 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.176 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:05.176 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.176 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.176 21:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.176 21:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.435 21:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:21:05.435 21:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:21:06.367 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.367 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.367 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.367 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.367 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.367 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.367 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.367 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.932 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:06.932 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.932 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.932 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:06.932 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:06.932 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.932 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.932 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.932 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:06.932 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.932 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.932 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.932 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.190 00:21:07.190 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.190 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.190 21:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.448 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.448 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.448 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.448 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.448 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.448 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.448 { 00:21:07.448 "cntlid": 107, 00:21:07.448 "qid": 0, 00:21:07.448 "state": "enabled", 00:21:07.448 "thread": "nvmf_tgt_poll_group_000", 00:21:07.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:07.448 "listen_address": { 00:21:07.448 "trtype": "TCP", 00:21:07.448 "adrfam": "IPv4", 00:21:07.448 "traddr": "10.0.0.2", 00:21:07.448 "trsvcid": "4420" 00:21:07.448 }, 00:21:07.448 "peer_address": { 00:21:07.448 "trtype": "TCP", 00:21:07.448 "adrfam": "IPv4", 00:21:07.448 "traddr": "10.0.0.1", 00:21:07.448 "trsvcid": "45822" 00:21:07.448 }, 00:21:07.448 "auth": { 00:21:07.448 "state": "completed", 00:21:07.448 "digest": "sha512", 00:21:07.448 "dhgroup": "ffdhe2048" 00:21:07.448 } 00:21:07.448 } 00:21:07.448 ]' 00:21:07.448 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.448 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.448 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.448 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:07.448 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:07.448 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.448 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.449 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.706 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:21:07.706 21:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:21:08.639 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.639 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.639 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.639 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.639 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.639 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.639 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:08.639 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:08.897 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:08.897 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.897 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.897 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:08.897 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:08.897 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.897 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
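Each qpairs JSON dump in this trace is then validated field by field. In effect the @75/@76/@77 checks reduce to the following sketch, assuming the nvmf_subsystem_get_qpairs output has been captured into $qpairs as the qpairs='...' assignments above indicate; the expected dhgroup is whichever group the current loop iteration configured.

  # Per-qpair auth checks applied after each attach (jq filters copied from the trace).
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe2048" ]]   # or "null"/"ffdhe3072", per iteration
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]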
00:21:08.897 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.897 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.897 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.897 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.897 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.897 21:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.463 00:21:09.463 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.463 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.463 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.721 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.721 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.722 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.722 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.722 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.722 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.722 { 00:21:09.722 "cntlid": 109, 00:21:09.722 "qid": 0, 00:21:09.722 "state": "enabled", 00:21:09.722 "thread": "nvmf_tgt_poll_group_000", 00:21:09.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:09.722 "listen_address": { 00:21:09.722 "trtype": "TCP", 00:21:09.722 "adrfam": "IPv4", 00:21:09.722 "traddr": "10.0.0.2", 00:21:09.722 "trsvcid": "4420" 00:21:09.722 }, 00:21:09.722 "peer_address": { 00:21:09.722 "trtype": "TCP", 00:21:09.722 "adrfam": "IPv4", 00:21:09.722 "traddr": "10.0.0.1", 00:21:09.722 "trsvcid": "45834" 00:21:09.722 }, 00:21:09.722 "auth": { 00:21:09.722 "state": "completed", 00:21:09.722 "digest": "sha512", 00:21:09.722 "dhgroup": "ffdhe2048" 00:21:09.722 } 00:21:09.722 } 00:21:09.722 ]' 00:21:09.722 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.722 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.722 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.722 21:10:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:09.722 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.722 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.722 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.722 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.981 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:21:09.981 21:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:21:10.915 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.915 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.915 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.915 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.915 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.915 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.915 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.915 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:11.173 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:11.173 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.173 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.173 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:11.173 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:11.173 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.173 21:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:11.173 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.173 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.173 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.173 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:11.173 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.173 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.739 00:21:11.739 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.739 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.739 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.997 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.997 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.997 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.997 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.997 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.997 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.997 { 00:21:11.997 "cntlid": 111, 00:21:11.997 "qid": 0, 00:21:11.997 "state": "enabled", 00:21:11.997 "thread": "nvmf_tgt_poll_group_000", 00:21:11.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:11.997 "listen_address": { 00:21:11.997 "trtype": "TCP", 00:21:11.997 "adrfam": "IPv4", 00:21:11.997 "traddr": "10.0.0.2", 00:21:11.997 "trsvcid": "4420" 00:21:11.997 }, 00:21:11.997 "peer_address": { 00:21:11.997 "trtype": "TCP", 00:21:11.997 "adrfam": "IPv4", 00:21:11.997 "traddr": "10.0.0.1", 00:21:11.997 "trsvcid": "45866" 00:21:11.997 }, 00:21:11.997 "auth": { 00:21:11.997 "state": "completed", 00:21:11.997 "digest": "sha512", 00:21:11.997 "dhgroup": "ffdhe2048" 00:21:11.997 } 00:21:11.997 } 00:21:11.997 ]' 00:21:11.997 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.997 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.997 
21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.997 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:11.997 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.997 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.997 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.997 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.255 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:21:12.255 21:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:21:13.189 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.189 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.189 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.189 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.189 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.189 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.189 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.189 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.189 21:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.447 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:13.447 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.447 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.447 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:13.447 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:13.447 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.447 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.447 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.447 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.447 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.447 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.447 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.447 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.013 00:21:14.013 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.013 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.013 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.272 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.272 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.272 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.272 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.272 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.272 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.272 { 00:21:14.272 "cntlid": 113, 00:21:14.272 "qid": 0, 00:21:14.272 "state": "enabled", 00:21:14.272 "thread": "nvmf_tgt_poll_group_000", 00:21:14.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:14.272 "listen_address": { 00:21:14.272 "trtype": "TCP", 00:21:14.272 "adrfam": "IPv4", 00:21:14.272 "traddr": "10.0.0.2", 00:21:14.272 "trsvcid": "4420" 00:21:14.272 }, 00:21:14.272 "peer_address": { 00:21:14.272 "trtype": "TCP", 00:21:14.272 "adrfam": "IPv4", 00:21:14.272 "traddr": "10.0.0.1", 00:21:14.272 "trsvcid": "45902" 00:21:14.272 }, 00:21:14.272 "auth": { 00:21:14.272 "state": "completed", 00:21:14.272 "digest": "sha512", 00:21:14.272 "dhgroup": "ffdhe3072" 00:21:14.272 } 00:21:14.272 } 00:21:14.272 ]' 00:21:14.272 21:10:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.272 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.272 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.272 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:14.272 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.272 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.272 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.272 21:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.530 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:21:14.530 21:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:21:15.463 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.463 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.463 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.463 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.463 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.463 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.463 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:15.463 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:15.721 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:15.721 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.721 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:15.721 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:15.721 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:15.721 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.721 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.721 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.721 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.721 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.721 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.721 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.721 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.287 00:21:16.287 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.287 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.287 21:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.545 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.545 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.545 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.545 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.545 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.545 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.545 { 00:21:16.545 "cntlid": 115, 00:21:16.545 "qid": 0, 00:21:16.545 "state": "enabled", 00:21:16.545 "thread": "nvmf_tgt_poll_group_000", 00:21:16.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:16.545 "listen_address": { 00:21:16.545 "trtype": "TCP", 00:21:16.545 "adrfam": "IPv4", 00:21:16.545 "traddr": "10.0.0.2", 00:21:16.545 "trsvcid": "4420" 00:21:16.545 }, 00:21:16.545 "peer_address": { 00:21:16.545 "trtype": "TCP", 00:21:16.545 "adrfam": "IPv4", 
00:21:16.545 "traddr": "10.0.0.1", 00:21:16.545 "trsvcid": "33784" 00:21:16.545 }, 00:21:16.545 "auth": { 00:21:16.545 "state": "completed", 00:21:16.545 "digest": "sha512", 00:21:16.545 "dhgroup": "ffdhe3072" 00:21:16.545 } 00:21:16.545 } 00:21:16.545 ]' 00:21:16.545 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.545 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.545 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.545 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:16.545 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.545 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.545 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.545 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.803 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:21:16.803 21:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:21:17.738 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.996 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.996 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.996 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.996 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.996 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.996 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:17.996 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:18.255 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
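Each connect_authenticate pass traced here repeats the same exchange between the target RPC (rpc_cmd) and the host RPC server (hostrpc, which the trace shows expanding to rpc.py -s /var/tmp/host.sock). A condensed sketch of the sha512 / ffdhe3072 / key2 pass that begins at this point, written with the same helpers and assuming the key2/ckey2 keyring entries were registered earlier in the run; this is a summary of the traced sequence, not a verbatim copy of target/auth.sh:

    # host side: offer only the digest/dhgroup combination under test
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # target side: authorize the host NQN with the key pair for this iteration
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # host side: attaching the controller forces the DH-HMAC-CHAP handshake
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # verify the qpair negotiated what was offered, then tear down for the next key index
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'   # expect sha512 / ffdhe3072 / completed
    hostrpc bdev_nvme_detach_controller nvme0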
00:21:18.255 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.255 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.255 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:18.255 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:18.255 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.255 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.255 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.255 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.255 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.255 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.255 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.255 21:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.512 00:21:18.512 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.512 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.512 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.771 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.771 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.771 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.771 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.771 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.771 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.771 { 00:21:18.771 "cntlid": 117, 00:21:18.771 "qid": 0, 00:21:18.771 "state": "enabled", 00:21:18.771 "thread": "nvmf_tgt_poll_group_000", 00:21:18.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:18.771 "listen_address": { 00:21:18.771 "trtype": "TCP", 
00:21:18.771 "adrfam": "IPv4", 00:21:18.771 "traddr": "10.0.0.2", 00:21:18.771 "trsvcid": "4420" 00:21:18.771 }, 00:21:18.771 "peer_address": { 00:21:18.771 "trtype": "TCP", 00:21:18.771 "adrfam": "IPv4", 00:21:18.771 "traddr": "10.0.0.1", 00:21:18.771 "trsvcid": "33800" 00:21:18.771 }, 00:21:18.771 "auth": { 00:21:18.771 "state": "completed", 00:21:18.771 "digest": "sha512", 00:21:18.771 "dhgroup": "ffdhe3072" 00:21:18.771 } 00:21:18.771 } 00:21:18.771 ]' 00:21:18.771 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.771 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.771 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.771 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:18.771 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.027 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.027 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.027 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.284 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:21:19.284 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:21:20.218 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.218 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.218 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.218 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.218 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.218 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.218 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:20.218 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:20.477 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:20.477 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.477 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.477 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:20.477 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:20.477 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.477 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:20.477 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.477 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.477 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.477 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:20.477 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.477 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.735 00:21:20.735 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.735 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.994 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.252 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.252 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.252 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.252 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.252 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.252 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.252 { 00:21:21.252 "cntlid": 119, 00:21:21.252 "qid": 0, 00:21:21.252 "state": "enabled", 00:21:21.252 "thread": "nvmf_tgt_poll_group_000", 00:21:21.252 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:21.252 "listen_address": { 00:21:21.252 "trtype": "TCP", 00:21:21.252 "adrfam": "IPv4", 00:21:21.252 "traddr": "10.0.0.2", 00:21:21.252 "trsvcid": "4420" 00:21:21.252 }, 00:21:21.252 "peer_address": { 00:21:21.252 "trtype": "TCP", 00:21:21.252 "adrfam": "IPv4", 00:21:21.252 "traddr": "10.0.0.1", 00:21:21.252 "trsvcid": "33834" 00:21:21.252 }, 00:21:21.252 "auth": { 00:21:21.252 "state": "completed", 00:21:21.252 "digest": "sha512", 00:21:21.252 "dhgroup": "ffdhe3072" 00:21:21.252 } 00:21:21.252 } 00:21:21.252 ]' 00:21:21.252 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.252 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.252 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.252 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:21.252 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.252 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.252 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.252 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.511 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:21:21.511 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:21:22.445 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.445 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.445 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.445 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.445 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.445 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.445 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.445 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:22.445 21:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:23.012 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:23.012 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.012 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.012 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:23.012 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:23.012 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.012 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.012 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.012 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.012 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.012 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.012 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.012 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.270 00:21:23.270 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.270 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.270 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.528 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.528 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.528 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.528 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.528 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.528 21:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.528 { 00:21:23.528 "cntlid": 121, 00:21:23.528 "qid": 0, 00:21:23.528 "state": "enabled", 00:21:23.528 "thread": "nvmf_tgt_poll_group_000", 00:21:23.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:23.528 "listen_address": { 00:21:23.528 "trtype": "TCP", 00:21:23.528 "adrfam": "IPv4", 00:21:23.528 "traddr": "10.0.0.2", 00:21:23.528 "trsvcid": "4420" 00:21:23.528 }, 00:21:23.528 "peer_address": { 00:21:23.528 "trtype": "TCP", 00:21:23.528 "adrfam": "IPv4", 00:21:23.528 "traddr": "10.0.0.1", 00:21:23.528 "trsvcid": "33868" 00:21:23.528 }, 00:21:23.528 "auth": { 00:21:23.528 "state": "completed", 00:21:23.528 "digest": "sha512", 00:21:23.528 "dhgroup": "ffdhe4096" 00:21:23.528 } 00:21:23.528 } 00:21:23.528 ]' 00:21:23.528 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.786 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.786 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.786 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:23.786 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.786 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.786 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.786 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.044 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:21:24.044 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:21:24.989 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.989 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.989 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.989 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.989 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
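Between the RPC passes the harness also drives a kernel initiator through nvme-cli (the nvme_connect helper seen in the trace). A minimal sketch of that leg, with DHHC-1:NN:<...> standing in for the literal interchange-format secrets that are printed in full in the surrounding log records:

    # in-band DH-HMAC-CHAP from the kernel host; the secrets are the DHHC-1 strings echoed in the trace
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
        --dhchap-secret 'DHHC-1:NN:<host key>' --dhchap-ctrl-secret 'DHHC-1:NN:<ctrl key>'

    # a successful handshake is immediately followed by disconnect and host removal,
    # so the next digest/dhgroup/key combination starts from a clean subsystem
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55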
00:21:24.989 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.989 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:24.989 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:25.315 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:25.315 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.315 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.315 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:25.315 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:25.315 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.315 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.315 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.315 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.315 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.315 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.315 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.315 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.601 00:21:25.601 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.601 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.601 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.859 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.859 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.859 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.859 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.117 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.117 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.117 { 00:21:26.117 "cntlid": 123, 00:21:26.117 "qid": 0, 00:21:26.117 "state": "enabled", 00:21:26.117 "thread": "nvmf_tgt_poll_group_000", 00:21:26.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:26.117 "listen_address": { 00:21:26.117 "trtype": "TCP", 00:21:26.117 "adrfam": "IPv4", 00:21:26.117 "traddr": "10.0.0.2", 00:21:26.117 "trsvcid": "4420" 00:21:26.117 }, 00:21:26.117 "peer_address": { 00:21:26.117 "trtype": "TCP", 00:21:26.117 "adrfam": "IPv4", 00:21:26.117 "traddr": "10.0.0.1", 00:21:26.117 "trsvcid": "33896" 00:21:26.117 }, 00:21:26.117 "auth": { 00:21:26.117 "state": "completed", 00:21:26.117 "digest": "sha512", 00:21:26.117 "dhgroup": "ffdhe4096" 00:21:26.117 } 00:21:26.117 } 00:21:26.117 ]' 00:21:26.117 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.117 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.117 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.117 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:26.117 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.117 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.117 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.117 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.375 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:21:26.375 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:21:27.309 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.309 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.309 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.309 21:11:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.309 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.309 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.309 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:27.309 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:27.568 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:27.568 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.568 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:27.568 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:27.568 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:27.568 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.568 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.568 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.568 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.568 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.568 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.568 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.568 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.135 00:21:28.135 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.135 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.135 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.393 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.393 21:11:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.393 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.393 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.393 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.393 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.393 { 00:21:28.393 "cntlid": 125, 00:21:28.393 "qid": 0, 00:21:28.393 "state": "enabled", 00:21:28.393 "thread": "nvmf_tgt_poll_group_000", 00:21:28.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:28.393 "listen_address": { 00:21:28.393 "trtype": "TCP", 00:21:28.393 "adrfam": "IPv4", 00:21:28.393 "traddr": "10.0.0.2", 00:21:28.393 "trsvcid": "4420" 00:21:28.393 }, 00:21:28.393 "peer_address": { 00:21:28.393 "trtype": "TCP", 00:21:28.393 "adrfam": "IPv4", 00:21:28.393 "traddr": "10.0.0.1", 00:21:28.393 "trsvcid": "57704" 00:21:28.393 }, 00:21:28.393 "auth": { 00:21:28.393 "state": "completed", 00:21:28.393 "digest": "sha512", 00:21:28.393 "dhgroup": "ffdhe4096" 00:21:28.393 } 00:21:28.393 } 00:21:28.393 ]' 00:21:28.393 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.393 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.393 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.393 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:28.393 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.393 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.393 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.393 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.960 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:21:28.960 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:21:29.894 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.894 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.894 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.894 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.894 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.894 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.894 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:29.894 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:30.152 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:30.152 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.152 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.152 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:30.152 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:30.152 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.152 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:30.152 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.152 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.152 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.152 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:30.152 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.152 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.717 00:21:30.717 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.717 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.717 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.975 21:11:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.975 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.975 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.975 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.975 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.975 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.975 { 00:21:30.975 "cntlid": 127, 00:21:30.975 "qid": 0, 00:21:30.975 "state": "enabled", 00:21:30.975 "thread": "nvmf_tgt_poll_group_000", 00:21:30.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:30.975 "listen_address": { 00:21:30.975 "trtype": "TCP", 00:21:30.975 "adrfam": "IPv4", 00:21:30.975 "traddr": "10.0.0.2", 00:21:30.975 "trsvcid": "4420" 00:21:30.975 }, 00:21:30.975 "peer_address": { 00:21:30.975 "trtype": "TCP", 00:21:30.975 "adrfam": "IPv4", 00:21:30.975 "traddr": "10.0.0.1", 00:21:30.975 "trsvcid": "57736" 00:21:30.975 }, 00:21:30.975 "auth": { 00:21:30.975 "state": "completed", 00:21:30.975 "digest": "sha512", 00:21:30.975 "dhgroup": "ffdhe4096" 00:21:30.975 } 00:21:30.975 } 00:21:30.975 ]' 00:21:30.975 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.975 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.975 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.975 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:30.975 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.975 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.975 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.975 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.233 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:21:31.233 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:21:32.169 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.169 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.169 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.169 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.169 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.169 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.169 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.169 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:32.169 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:32.427 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:32.427 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.427 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.427 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:32.427 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:32.427 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.427 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.427 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.427 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.427 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.427 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.427 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.427 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.993 00:21:33.251 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.251 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.251 
21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.510 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.510 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.510 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.510 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.510 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.510 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.510 { 00:21:33.510 "cntlid": 129, 00:21:33.510 "qid": 0, 00:21:33.510 "state": "enabled", 00:21:33.510 "thread": "nvmf_tgt_poll_group_000", 00:21:33.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:33.510 "listen_address": { 00:21:33.510 "trtype": "TCP", 00:21:33.510 "adrfam": "IPv4", 00:21:33.510 "traddr": "10.0.0.2", 00:21:33.510 "trsvcid": "4420" 00:21:33.510 }, 00:21:33.510 "peer_address": { 00:21:33.510 "trtype": "TCP", 00:21:33.510 "adrfam": "IPv4", 00:21:33.510 "traddr": "10.0.0.1", 00:21:33.510 "trsvcid": "57762" 00:21:33.510 }, 00:21:33.510 "auth": { 00:21:33.510 "state": "completed", 00:21:33.510 "digest": "sha512", 00:21:33.510 "dhgroup": "ffdhe6144" 00:21:33.510 } 00:21:33.510 } 00:21:33.510 ]' 00:21:33.510 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.510 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.510 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.510 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:33.510 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.510 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.510 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.510 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.768 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:21:33.768 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret 
DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:21:34.702 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.702 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.702 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.702 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.702 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.702 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.702 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:34.702 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:34.960 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:34.960 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.960 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.960 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:34.960 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:34.960 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.960 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.960 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.960 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.960 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.960 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.960 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.960 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.527 00:21:35.527 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.527 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.527 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.091 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.091 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.091 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.091 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.091 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.091 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.091 { 00:21:36.091 "cntlid": 131, 00:21:36.091 "qid": 0, 00:21:36.091 "state": "enabled", 00:21:36.091 "thread": "nvmf_tgt_poll_group_000", 00:21:36.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:36.091 "listen_address": { 00:21:36.091 "trtype": "TCP", 00:21:36.091 "adrfam": "IPv4", 00:21:36.091 "traddr": "10.0.0.2", 00:21:36.091 "trsvcid": "4420" 00:21:36.091 }, 00:21:36.091 "peer_address": { 00:21:36.091 "trtype": "TCP", 00:21:36.091 "adrfam": "IPv4", 00:21:36.091 "traddr": "10.0.0.1", 00:21:36.091 "trsvcid": "57804" 00:21:36.091 }, 00:21:36.091 "auth": { 00:21:36.091 "state": "completed", 00:21:36.091 "digest": "sha512", 00:21:36.091 "dhgroup": "ffdhe6144" 00:21:36.091 } 00:21:36.091 } 00:21:36.091 ]' 00:21:36.091 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.091 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.091 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.091 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:36.091 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.092 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.092 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.092 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.350 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:21:36.350 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:21:37.283 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.283 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.283 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.283 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.283 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.283 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.283 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:37.283 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:37.541 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:37.541 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.541 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.541 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:37.541 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:37.541 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.541 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.541 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.541 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.541 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.541 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.541 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.541 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.106 00:21:38.106 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.106 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.106 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.672 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.672 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.672 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.672 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.672 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.672 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.672 { 00:21:38.672 "cntlid": 133, 00:21:38.672 "qid": 0, 00:21:38.672 "state": "enabled", 00:21:38.672 "thread": "nvmf_tgt_poll_group_000", 00:21:38.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:38.672 "listen_address": { 00:21:38.672 "trtype": "TCP", 00:21:38.672 "adrfam": "IPv4", 00:21:38.672 "traddr": "10.0.0.2", 00:21:38.672 "trsvcid": "4420" 00:21:38.672 }, 00:21:38.672 "peer_address": { 00:21:38.672 "trtype": "TCP", 00:21:38.672 "adrfam": "IPv4", 00:21:38.672 "traddr": "10.0.0.1", 00:21:38.672 "trsvcid": "56452" 00:21:38.672 }, 00:21:38.672 "auth": { 00:21:38.672 "state": "completed", 00:21:38.672 "digest": "sha512", 00:21:38.672 "dhgroup": "ffdhe6144" 00:21:38.672 } 00:21:38.672 } 00:21:38.672 ]' 00:21:38.672 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.672 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.672 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.672 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:38.672 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.672 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.672 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.672 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.930 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret 
DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:21:38.930 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:21:39.863 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.863 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.863 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.863 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.863 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.863 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.863 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:39.863 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.122 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:40.122 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.122 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.122 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:40.122 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:40.122 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.122 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:40.122 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.122 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.122 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.122 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:40.122 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:40.122 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.688 00:21:40.688 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.688 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.688 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.946 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.946 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.946 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.946 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.204 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.204 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.204 { 00:21:41.204 "cntlid": 135, 00:21:41.204 "qid": 0, 00:21:41.204 "state": "enabled", 00:21:41.204 "thread": "nvmf_tgt_poll_group_000", 00:21:41.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:41.204 "listen_address": { 00:21:41.204 "trtype": "TCP", 00:21:41.204 "adrfam": "IPv4", 00:21:41.204 "traddr": "10.0.0.2", 00:21:41.204 "trsvcid": "4420" 00:21:41.204 }, 00:21:41.204 "peer_address": { 00:21:41.204 "trtype": "TCP", 00:21:41.204 "adrfam": "IPv4", 00:21:41.204 "traddr": "10.0.0.1", 00:21:41.204 "trsvcid": "56482" 00:21:41.204 }, 00:21:41.204 "auth": { 00:21:41.204 "state": "completed", 00:21:41.204 "digest": "sha512", 00:21:41.204 "dhgroup": "ffdhe6144" 00:21:41.204 } 00:21:41.204 } 00:21:41.204 ]' 00:21:41.204 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.204 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.204 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.204 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:41.204 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.204 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.204 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.204 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.462 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:21:41.462 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:21:42.394 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.394 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.394 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.394 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.394 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.394 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:42.394 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.394 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:42.394 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:42.653 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:42.653 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.653 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.653 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:42.653 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:42.653 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.653 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.653 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.653 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.653 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.653 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.653 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.653 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.586 00:21:43.586 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.586 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.586 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.844 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.844 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.844 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.844 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.844 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.844 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.844 { 00:21:43.844 "cntlid": 137, 00:21:43.844 "qid": 0, 00:21:43.844 "state": "enabled", 00:21:43.844 "thread": "nvmf_tgt_poll_group_000", 00:21:43.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:43.844 "listen_address": { 00:21:43.844 "trtype": "TCP", 00:21:43.844 "adrfam": "IPv4", 00:21:43.844 "traddr": "10.0.0.2", 00:21:43.844 "trsvcid": "4420" 00:21:43.844 }, 00:21:43.844 "peer_address": { 00:21:43.844 "trtype": "TCP", 00:21:43.844 "adrfam": "IPv4", 00:21:43.844 "traddr": "10.0.0.1", 00:21:43.844 "trsvcid": "56498" 00:21:43.844 }, 00:21:43.844 "auth": { 00:21:43.844 "state": "completed", 00:21:43.844 "digest": "sha512", 00:21:43.844 "dhgroup": "ffdhe8192" 00:21:43.844 } 00:21:43.844 } 00:21:43.844 ]' 00:21:43.844 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.102 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.102 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.102 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:44.102 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.102 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.102 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.102 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.361 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:21:44.361 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:21:45.294 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.294 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.294 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.294 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.294 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.294 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.294 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:45.294 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:45.552 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:45.552 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.552 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.552 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:45.552 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:45.552 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.552 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.552 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.552 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.809 21:11:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.809 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.809 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.809 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.742 00:21:46.742 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.742 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.742 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.742 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.742 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.742 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.742 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.999 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.999 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.999 { 00:21:46.999 "cntlid": 139, 00:21:46.999 "qid": 0, 00:21:46.999 "state": "enabled", 00:21:46.999 "thread": "nvmf_tgt_poll_group_000", 00:21:46.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:46.999 "listen_address": { 00:21:46.999 "trtype": "TCP", 00:21:46.999 "adrfam": "IPv4", 00:21:46.999 "traddr": "10.0.0.2", 00:21:46.999 "trsvcid": "4420" 00:21:46.999 }, 00:21:46.999 "peer_address": { 00:21:46.999 "trtype": "TCP", 00:21:46.999 "adrfam": "IPv4", 00:21:46.999 "traddr": "10.0.0.1", 00:21:46.999 "trsvcid": "46010" 00:21:46.999 }, 00:21:46.999 "auth": { 00:21:46.999 "state": "completed", 00:21:46.999 "digest": "sha512", 00:21:46.999 "dhgroup": "ffdhe8192" 00:21:46.999 } 00:21:46.999 } 00:21:46.999 ]' 00:21:46.999 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.999 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.999 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.999 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:46.999 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.999 21:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.999 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.999 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.256 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:21:47.256 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: --dhchap-ctrl-secret DHHC-1:02:YmI5MWJkMzg5NDNhNDU4NDFmZmNjNGMyYWVlOGUzNmU1NGI5NjFlNjIwODNmNjBilpVIoA==: 00:21:48.186 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.186 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.186 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.186 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.186 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.186 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.186 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.187 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.443 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:48.443 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.443 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.443 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:48.443 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:48.443 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.443 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.443 21:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.443 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.443 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.443 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.443 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.443 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.375 00:21:49.375 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.375 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.375 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.633 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.633 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.633 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.633 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.633 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.633 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.633 { 00:21:49.633 "cntlid": 141, 00:21:49.633 "qid": 0, 00:21:49.633 "state": "enabled", 00:21:49.633 "thread": "nvmf_tgt_poll_group_000", 00:21:49.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:49.633 "listen_address": { 00:21:49.633 "trtype": "TCP", 00:21:49.633 "adrfam": "IPv4", 00:21:49.633 "traddr": "10.0.0.2", 00:21:49.633 "trsvcid": "4420" 00:21:49.633 }, 00:21:49.633 "peer_address": { 00:21:49.633 "trtype": "TCP", 00:21:49.633 "adrfam": "IPv4", 00:21:49.633 "traddr": "10.0.0.1", 00:21:49.633 "trsvcid": "46034" 00:21:49.633 }, 00:21:49.633 "auth": { 00:21:49.633 "state": "completed", 00:21:49.633 "digest": "sha512", 00:21:49.633 "dhgroup": "ffdhe8192" 00:21:49.633 } 00:21:49.633 } 00:21:49.633 ]' 00:21:49.633 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.633 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.633 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.633 21:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:49.633 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.891 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.891 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.891 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.149 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:21:50.149 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:01:NTNhODc2NDg1OWU1ZjVmYzg2OWQwNWQ0YjFjMjdlY2MRqlQy: 00:21:51.081 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.081 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.081 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.081 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.081 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.081 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.081 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.081 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.338 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:51.338 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.338 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.339 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:51.339 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:51.339 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.339 21:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:51.339 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.339 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.339 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.339 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:51.339 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:51.339 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.270 00:21:52.270 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.270 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.270 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.528 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.528 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.528 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.528 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.528 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.528 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.528 { 00:21:52.528 "cntlid": 143, 00:21:52.528 "qid": 0, 00:21:52.528 "state": "enabled", 00:21:52.528 "thread": "nvmf_tgt_poll_group_000", 00:21:52.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:52.528 "listen_address": { 00:21:52.528 "trtype": "TCP", 00:21:52.528 "adrfam": "IPv4", 00:21:52.528 "traddr": "10.0.0.2", 00:21:52.528 "trsvcid": "4420" 00:21:52.528 }, 00:21:52.528 "peer_address": { 00:21:52.528 "trtype": "TCP", 00:21:52.528 "adrfam": "IPv4", 00:21:52.528 "traddr": "10.0.0.1", 00:21:52.528 "trsvcid": "46062" 00:21:52.528 }, 00:21:52.528 "auth": { 00:21:52.528 "state": "completed", 00:21:52.528 "digest": "sha512", 00:21:52.528 "dhgroup": "ffdhe8192" 00:21:52.528 } 00:21:52.528 } 00:21:52.528 ]' 00:21:52.528 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.528 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.528 
21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.786 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.786 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.787 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.787 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.787 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.045 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:21:53.045 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:21:53.979 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.979 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.979 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.979 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.979 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.979 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:53.979 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:53.979 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:53.979 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:53.979 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:53.979 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:54.237 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:54.237 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.237 21:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.237 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:54.237 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:54.237 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.237 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.237 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.237 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.495 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.495 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.495 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.495 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.483 00:21:55.483 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.483 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.483 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.483 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.483 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.483 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.483 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.483 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.483 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.483 { 00:21:55.483 "cntlid": 145, 00:21:55.483 "qid": 0, 00:21:55.483 "state": "enabled", 00:21:55.483 "thread": "nvmf_tgt_poll_group_000", 00:21:55.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:55.483 "listen_address": { 00:21:55.483 "trtype": "TCP", 00:21:55.483 "adrfam": "IPv4", 00:21:55.483 "traddr": "10.0.0.2", 00:21:55.483 "trsvcid": "4420" 00:21:55.483 }, 00:21:55.483 "peer_address": { 00:21:55.483 
"trtype": "TCP", 00:21:55.483 "adrfam": "IPv4", 00:21:55.483 "traddr": "10.0.0.1", 00:21:55.483 "trsvcid": "46078" 00:21:55.483 }, 00:21:55.483 "auth": { 00:21:55.483 "state": "completed", 00:21:55.483 "digest": "sha512", 00:21:55.483 "dhgroup": "ffdhe8192" 00:21:55.483 } 00:21:55.483 } 00:21:55.483 ]' 00:21:55.483 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.754 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.754 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.754 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:55.754 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.754 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.754 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.754 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.012 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:21:56.012 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NjFiMDc5M2ZjOTkyMDFiNzNhNTQ5MTZlNWE0NTMxZTI1NmNhOWU3NjYwZGE3ZjJikOVQ/Q==: --dhchap-ctrl-secret DHHC-1:03:NWQ3ZGVjNjI2OTc3Yzc4ZGNmM2JiYWM5ZWJkYzkwMTBlNWYyNWUzYWQyYmExOGIyYzNhYTZiOGQ5M2IxNTkxMzBTGm8=: 00:21:56.949 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.949 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.949 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.949 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.949 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.949 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:56.949 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.949 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.949 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.949 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:56.949 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:56.949 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:56.949 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:56.949 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:56.949 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:56.949 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:56.949 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:56.949 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:56.949 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:57.883 request: 00:21:57.883 { 00:21:57.883 "name": "nvme0", 00:21:57.883 "trtype": "tcp", 00:21:57.883 "traddr": "10.0.0.2", 00:21:57.883 "adrfam": "ipv4", 00:21:57.883 "trsvcid": "4420", 00:21:57.883 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:57.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:57.883 "prchk_reftag": false, 00:21:57.883 "prchk_guard": false, 00:21:57.883 "hdgst": false, 00:21:57.883 "ddgst": false, 00:21:57.883 "dhchap_key": "key2", 00:21:57.883 "allow_unrecognized_csi": false, 00:21:57.883 "method": "bdev_nvme_attach_controller", 00:21:57.883 "req_id": 1 00:21:57.883 } 00:21:57.883 Got JSON-RPC error response 00:21:57.883 response: 00:21:57.883 { 00:21:57.883 "code": -5, 00:21:57.883 "message": "Input/output error" 00:21:57.883 } 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.883 21:11:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:57.883 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:58.817 request: 00:21:58.817 { 00:21:58.817 "name": "nvme0", 00:21:58.817 "trtype": "tcp", 00:21:58.817 "traddr": "10.0.0.2", 00:21:58.817 "adrfam": "ipv4", 00:21:58.817 "trsvcid": "4420", 00:21:58.817 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:58.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:58.817 "prchk_reftag": false, 00:21:58.817 "prchk_guard": false, 00:21:58.817 "hdgst": false, 00:21:58.817 "ddgst": false, 00:21:58.817 "dhchap_key": "key1", 00:21:58.817 "dhchap_ctrlr_key": "ckey2", 00:21:58.817 "allow_unrecognized_csi": false, 00:21:58.817 "method": "bdev_nvme_attach_controller", 00:21:58.817 "req_id": 1 00:21:58.817 } 00:21:58.817 Got JSON-RPC error response 00:21:58.817 response: 00:21:58.817 { 00:21:58.817 "code": -5, 00:21:58.817 "message": "Input/output error" 00:21:58.817 } 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:58.817 21:11:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.817 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.751 request: 00:21:59.751 { 00:21:59.751 "name": "nvme0", 00:21:59.751 "trtype": "tcp", 00:21:59.751 "traddr": "10.0.0.2", 00:21:59.751 "adrfam": "ipv4", 00:21:59.751 "trsvcid": "4420", 00:21:59.751 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:59.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:59.751 "prchk_reftag": false, 00:21:59.751 "prchk_guard": false, 00:21:59.751 "hdgst": false, 00:21:59.751 "ddgst": false, 00:21:59.751 "dhchap_key": "key1", 00:21:59.751 "dhchap_ctrlr_key": "ckey1", 00:21:59.751 "allow_unrecognized_csi": false, 00:21:59.751 "method": "bdev_nvme_attach_controller", 00:21:59.751 "req_id": 1 00:21:59.751 } 00:21:59.751 Got JSON-RPC error response 00:21:59.751 response: 00:21:59.751 { 00:21:59.751 "code": -5, 00:21:59.751 "message": "Input/output error" 00:21:59.751 } 00:21:59.751 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:59.751 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:59.751 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:59.751 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:59.751 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.751 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.751 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.751 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.751 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2991736 00:21:59.751 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2991736 ']' 00:21:59.751 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2991736 00:21:59.751 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:59.752 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.752 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2991736 00:21:59.752 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:59.752 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:59.752 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2991736' 00:21:59.752 killing process with pid 2991736 00:21:59.752 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2991736 00:21:59.752 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2991736 00:22:01.126 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:01.126 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:01.126 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:01.126 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:01.126 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3015213 00:22:01.126 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:01.126 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3015213 00:22:01.126 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3015213 ']' 00:22:01.126 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.126 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.126 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.126 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.126 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.060 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.060 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:02.060 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:02.060 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:02.060 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.060 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.060 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:02.060 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3015213 00:22:02.060 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3015213 ']' 00:22:02.060 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.060 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.060 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
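The trace above restarts the NVMe-oF target for the key-rotation phase: the previous nvmf_tgt is killed and a fresh instance is launched inside the cvl_0_0_ns_spdk network namespace with --wait-for-rpc and the nvmf_auth log flag, after which the script waits until the RPC socket answers. A minimal sketch of that pattern, reusing only the binary path, namespace and socket that appear in this log and approximating the waitforlisten helper with a simple polling loop (the real helper lives in autotest_common.sh):

NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Launch the target in the test namespace; -L nvmf_auth enables DH-HMAC-CHAP debug logging.
ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# Stand-in for waitforlisten: poll the default UNIX-domain RPC socket until the app responds.
until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done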
00:22:02.060 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.061 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.319 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.319 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:02.319 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:02.319 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.319 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.886 null0 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oj4 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.YrW ]] 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YrW 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.JbH 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.9QI ]] 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9QI 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:02.886 21:11:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.yXV 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.kkZ ]] 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kkZ 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.IWH 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.886 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:02.887 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
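The xtrace lines above load the generated DHCHAP keys into the target's keyring (keyring_file_add_key key0..key3 plus the ckey* controller keys) and then start connect_authenticate with sha512/ffdhe8192 using key3: the host NQN is registered on the subsystem with --dhchap-key key3 and the host-side RPC socket is asked to attach a controller with the same key. A condensed sketch of that exchange, using only the socket paths, key files and NQNs visible in this log and assuming key3 was also loaded into the keyring of the application serving /var/tmp/host.sock earlier in the run:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Target side (default /var/tmp/spdk.sock): register the key file and allow the host to authenticate with it.
"$RPC" keyring_file_add_key key3 /tmp/spdk.key-sha512.IWH
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

# Host side: attach a controller with the matching key; the controller only comes up
# if the DH-HMAC-CHAP transaction (sha512 digest, ffdhe8192 group here) completes.
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3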
00:22:02.887 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:04.272 nvme0n1 00:22:04.273 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.273 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.273 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.531 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.531 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.531 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.531 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.531 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.531 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.531 { 00:22:04.531 "cntlid": 1, 00:22:04.531 "qid": 0, 00:22:04.531 "state": "enabled", 00:22:04.531 "thread": "nvmf_tgt_poll_group_000", 00:22:04.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:04.531 "listen_address": { 00:22:04.531 "trtype": "TCP", 00:22:04.531 "adrfam": "IPv4", 00:22:04.531 "traddr": "10.0.0.2", 00:22:04.531 "trsvcid": "4420" 00:22:04.531 }, 00:22:04.531 "peer_address": { 00:22:04.531 "trtype": "TCP", 00:22:04.531 "adrfam": "IPv4", 00:22:04.531 "traddr": "10.0.0.1", 00:22:04.531 "trsvcid": "45836" 00:22:04.531 }, 00:22:04.531 "auth": { 00:22:04.531 "state": "completed", 00:22:04.531 "digest": "sha512", 00:22:04.531 "dhgroup": "ffdhe8192" 00:22:04.531 } 00:22:04.531 } 00:22:04.531 ]' 00:22:04.531 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.531 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.531 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.531 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:04.531 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.789 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.789 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.789 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.047 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:22:05.047 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:22:05.982 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.982 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.982 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.982 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.982 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.982 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:05.982 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.982 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.982 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.982 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:05.982 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:06.240 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:06.240 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:06.240 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:06.240 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:06.240 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:06.240 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:06.240 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:06.240 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:06.240 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.240 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.500 request: 00:22:06.500 { 00:22:06.500 "name": "nvme0", 00:22:06.500 "trtype": "tcp", 00:22:06.500 "traddr": "10.0.0.2", 00:22:06.500 "adrfam": "ipv4", 00:22:06.500 "trsvcid": "4420", 00:22:06.500 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.500 "prchk_reftag": false, 00:22:06.500 "prchk_guard": false, 00:22:06.500 "hdgst": false, 00:22:06.500 "ddgst": false, 00:22:06.500 "dhchap_key": "key3", 00:22:06.500 "allow_unrecognized_csi": false, 00:22:06.500 "method": "bdev_nvme_attach_controller", 00:22:06.500 "req_id": 1 00:22:06.500 } 00:22:06.500 Got JSON-RPC error response 00:22:06.500 response: 00:22:06.500 { 00:22:06.500 "code": -5, 00:22:06.500 "message": "Input/output error" 00:22:06.500 } 00:22:06.500 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:06.500 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:06.500 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:06.500 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:06.500 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:06.500 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:06.500 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:06.500 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:06.758 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:06.758 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:06.758 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:06.758 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:06.758 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:06.758 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:06.758 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:06.758 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:06.758 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.758 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:07.016 request: 00:22:07.016 { 00:22:07.016 "name": "nvme0", 00:22:07.016 "trtype": "tcp", 00:22:07.016 "traddr": "10.0.0.2", 00:22:07.016 "adrfam": "ipv4", 00:22:07.016 "trsvcid": "4420", 00:22:07.016 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:07.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:07.016 "prchk_reftag": false, 00:22:07.016 "prchk_guard": false, 00:22:07.016 "hdgst": false, 00:22:07.016 "ddgst": false, 00:22:07.016 "dhchap_key": "key3", 00:22:07.016 "allow_unrecognized_csi": false, 00:22:07.016 "method": "bdev_nvme_attach_controller", 00:22:07.016 "req_id": 1 00:22:07.016 } 00:22:07.016 Got JSON-RPC error response 00:22:07.016 response: 00:22:07.016 { 00:22:07.016 "code": -5, 00:22:07.016 "message": "Input/output error" 00:22:07.016 } 00:22:07.016 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:07.016 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:07.016 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:07.016 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:07.275 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:07.275 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:07.275 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:07.275 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:07.275 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:07.275 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:07.533 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.533 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.533 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.533 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.533 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.533 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.533 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.533 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.533 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:07.533 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:07.533 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:07.533 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:07.533 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:07.533 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:07.533 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:07.533 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:07.533 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:07.533 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:08.099 request: 00:22:08.099 { 00:22:08.099 "name": "nvme0", 00:22:08.099 "trtype": "tcp", 00:22:08.099 "traddr": "10.0.0.2", 00:22:08.099 "adrfam": "ipv4", 00:22:08.099 "trsvcid": "4420", 00:22:08.099 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:08.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:08.099 "prchk_reftag": false, 00:22:08.099 "prchk_guard": false, 00:22:08.099 "hdgst": false, 00:22:08.099 "ddgst": false, 00:22:08.099 "dhchap_key": "key0", 00:22:08.099 "dhchap_ctrlr_key": "key1", 00:22:08.099 "allow_unrecognized_csi": false, 00:22:08.099 "method": "bdev_nvme_attach_controller", 00:22:08.099 "req_id": 1 00:22:08.099 } 00:22:08.099 Got JSON-RPC error response 00:22:08.099 response: 00:22:08.099 { 00:22:08.099 "code": -5, 00:22:08.099 "message": "Input/output error" 00:22:08.099 } 00:22:08.099 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:08.099 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:08.099 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:08.099 21:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:08.099 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:08.099 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:08.099 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:08.357 nvme0n1 00:22:08.357 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:08.357 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:08.357 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.615 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.615 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.615 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.873 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:08.873 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.873 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.873 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.873 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:08.873 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:08.873 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:10.772 nvme0n1 00:22:10.772 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:10.772 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:10.772 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:10.772 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.772 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:10.772 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.772 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.772 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.772 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:10.772 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.772 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:11.030 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.030 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:22:11.030 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: --dhchap-ctrl-secret DHHC-1:03:NmY2YWZiNDhjNGFkM2QzNWU5OWVjMTIyMmQxZjU2MThjMmY5OWFhYmY5MzVjNmM5YTdmMDA0ZTAzZWM4ZTM5OZETm3A=: 00:22:11.963 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:11.963 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:11.963 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:11.963 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:11.963 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:11.963 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:11.963 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:11.963 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.963 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.222 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT 
bdev_connect -b nvme0 --dhchap-key key1 00:22:12.222 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:12.222 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:12.222 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:12.222 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.222 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:12.222 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.222 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:12.222 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:12.222 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:13.156 request: 00:22:13.156 { 00:22:13.156 "name": "nvme0", 00:22:13.156 "trtype": "tcp", 00:22:13.156 "traddr": "10.0.0.2", 00:22:13.156 "adrfam": "ipv4", 00:22:13.156 "trsvcid": "4420", 00:22:13.156 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:13.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:13.156 "prchk_reftag": false, 00:22:13.156 "prchk_guard": false, 00:22:13.156 "hdgst": false, 00:22:13.156 "ddgst": false, 00:22:13.156 "dhchap_key": "key1", 00:22:13.156 "allow_unrecognized_csi": false, 00:22:13.156 "method": "bdev_nvme_attach_controller", 00:22:13.156 "req_id": 1 00:22:13.156 } 00:22:13.156 Got JSON-RPC error response 00:22:13.156 response: 00:22:13.156 { 00:22:13.156 "code": -5, 00:22:13.156 "message": "Input/output error" 00:22:13.156 } 00:22:13.156 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:13.156 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:13.156 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:13.156 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:13.156 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:13.156 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:13.156 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:14.528 nvme0n1 00:22:14.528 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:14.528 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:14.528 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.092 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.092 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.092 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.349 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.349 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.349 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.349 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.349 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:15.349 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:15.349 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:15.610 nvme0n1 00:22:15.610 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:15.611 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.611 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:15.872 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.872 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.872 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.129 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:16.129 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.129 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.129 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.129 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: '' 2s 00:22:16.129 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:16.129 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:16.129 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: 00:22:16.129 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:16.129 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:16.129 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:16.129 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: ]] 00:22:16.129 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YWU1YWFjYjAzODZhZjNiZDgyNGZhMTVkMzU1NWZlMDYIHN1N: 00:22:16.129 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:16.129 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:16.129 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:18.657 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:18.657 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:18.657 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:18.657 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:18.657 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:18.657 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:18.657 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:18.657 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:18.658 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.658 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.658 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.658 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # 
nvme_set_keys nvme0 '' DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: 2s 00:22:18.658 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:18.658 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:18.658 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:18.658 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: 00:22:18.658 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:18.658 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:18.658 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:18.658 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: ]] 00:22:18.658 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MGUzYjAxMzcxNTY0MGVlZDUwODQ0NDM5NjEyYmVlNTgxMjgwN2MyZDBkYmM4YmVilGe3gQ==: 00:22:18.658 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:18.658 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:20.558 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:20.558 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:20.558 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:20.558 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:20.558 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:20.558 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:20.558 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:20.558 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.558 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:20.558 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.558 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.558 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.558 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:20.558 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:20.558 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:21.933 nvme0n1 00:22:21.933 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:21.933 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.933 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.933 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.933 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:21.933 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:22.868 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:22.868 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:22.868 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.868 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.868 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.868 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.868 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.868 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.868 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:22.868 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:23.435 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:23.435 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:23.435 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.435 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.435 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:23.435 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.435 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.435 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.435 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:23.435 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:23.435 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:23.435 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:23.435 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.435 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:23.435 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:23.435 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:23.435 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:24.371 request: 00:22:24.371 { 00:22:24.371 "name": "nvme0", 00:22:24.371 "dhchap_key": "key1", 00:22:24.371 "dhchap_ctrlr_key": "key3", 00:22:24.371 "method": "bdev_nvme_set_keys", 00:22:24.371 "req_id": 1 00:22:24.371 } 00:22:24.371 Got JSON-RPC error response 00:22:24.371 response: 00:22:24.371 { 00:22:24.372 "code": -13, 00:22:24.372 "message": "Permission denied" 00:22:24.372 } 00:22:24.372 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:24.372 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:24.372 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:24.372 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:24.372 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:24.372 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.372 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:24.630 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:24.630 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:26.011 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:26.011 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:26.011 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.011 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:26.011 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:26.011 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.011 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.011 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.011 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:26.011 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:26.011 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:27.385 nvme0n1 00:22:27.644 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:27.644 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.644 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.644 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.644 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:27.644 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:27.644 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:27.644 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
00:22:27.644 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:27.644 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:27.644 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:27.644 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:27.644 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:28.577 request: 00:22:28.577 { 00:22:28.577 "name": "nvme0", 00:22:28.577 "dhchap_key": "key2", 00:22:28.577 "dhchap_ctrlr_key": "key0", 00:22:28.577 "method": "bdev_nvme_set_keys", 00:22:28.577 "req_id": 1 00:22:28.577 } 00:22:28.577 Got JSON-RPC error response 00:22:28.577 response: 00:22:28.577 { 00:22:28.577 "code": -13, 00:22:28.577 "message": "Permission denied" 00:22:28.577 } 00:22:28.577 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:28.577 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:28.577 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:28.577 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:28.577 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:28.577 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.577 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:28.836 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:28.836 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:29.868 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:29.868 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:29.868 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.126 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:30.126 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:31.059 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:31.059 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:31.059 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.317 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:31.317 21:12:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:31.317 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:31.317 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2991890 00:22:31.317 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2991890 ']' 00:22:31.317 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2991890 00:22:31.317 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:31.317 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.317 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2991890 00:22:31.317 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:31.317 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:31.317 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2991890' 00:22:31.317 killing process with pid 2991890 00:22:31.317 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2991890 00:22:31.317 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2991890 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:33.847 rmmod nvme_tcp 00:22:33.847 rmmod nvme_fabrics 00:22:33.847 rmmod nvme_keyring 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3015213 ']' 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3015213 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3015213 ']' 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3015213 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3015213 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3015213' 00:22:33.847 killing process with pid 3015213 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3015213 00:22:33.847 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3015213 00:22:34.781 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:34.781 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:34.781 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:34.781 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:34.781 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:34.781 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:34.781 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:34.781 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.781 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:34.781 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.781 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.781 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.oj4 /tmp/spdk.key-sha256.JbH /tmp/spdk.key-sha384.yXV /tmp/spdk.key-sha512.IWH /tmp/spdk.key-sha512.YrW /tmp/spdk.key-sha384.9QI /tmp/spdk.key-sha256.kkZ '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:37.317 00:22:37.317 real 3m46.862s 00:22:37.317 user 8m46.199s 00:22:37.317 sys 0m27.489s 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.317 ************************************ 00:22:37.317 END TEST nvmf_auth_target 00:22:37.317 ************************************ 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:37.317 ************************************ 00:22:37.317 START TEST nvmf_bdevio_no_huge 00:22:37.317 ************************************ 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:37.317 * Looking for test storage... 00:22:37.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:37.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.317 --rc genhtml_branch_coverage=1 00:22:37.317 --rc genhtml_function_coverage=1 00:22:37.317 --rc genhtml_legend=1 00:22:37.317 --rc geninfo_all_blocks=1 00:22:37.317 --rc geninfo_unexecuted_blocks=1 00:22:37.317 00:22:37.317 ' 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:37.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.317 --rc genhtml_branch_coverage=1 00:22:37.317 --rc genhtml_function_coverage=1 00:22:37.317 --rc genhtml_legend=1 00:22:37.317 --rc geninfo_all_blocks=1 00:22:37.317 --rc geninfo_unexecuted_blocks=1 00:22:37.317 00:22:37.317 ' 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:37.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.317 --rc genhtml_branch_coverage=1 00:22:37.317 --rc genhtml_function_coverage=1 00:22:37.317 --rc genhtml_legend=1 00:22:37.317 --rc geninfo_all_blocks=1 00:22:37.317 --rc geninfo_unexecuted_blocks=1 00:22:37.317 00:22:37.317 ' 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:37.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.317 --rc genhtml_branch_coverage=1 00:22:37.317 --rc genhtml_function_coverage=1 00:22:37.317 --rc genhtml_legend=1 00:22:37.317 --rc geninfo_all_blocks=1 00:22:37.317 --rc geninfo_unexecuted_blocks=1 00:22:37.317 00:22:37.317 ' 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.317 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:37.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:37.318 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:39.222 
21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:39.222 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:39.222 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.222 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:39.223 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:39.223 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:39.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:22:39.223 00:22:39.223 --- 10.0.0.2 ping statistics --- 00:22:39.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.223 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:39.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:22:39.223 00:22:39.223 --- 10.0.0.1 ping statistics --- 00:22:39.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.223 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3021862 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3021862 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3021862 ']' 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.223 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:39.482 [2024-11-19 21:12:13.066864] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:22:39.482 [2024-11-19 21:12:13.067031] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:39.482 [2024-11-19 21:12:13.260019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:39.740 [2024-11-19 21:12:13.416727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.740 [2024-11-19 21:12:13.416823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.740 [2024-11-19 21:12:13.416848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.740 [2024-11-19 21:12:13.416872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.740 [2024-11-19 21:12:13.416892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
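The nvmfappstart trace above brings the SPDK NVMe-oF target up without hugepages inside the cvl_0_0_ns_spdk namespace created earlier. A minimal manual sketch of the same bring-up follows, using only the paths, namespace name, and flags visible in this trace; the spdk_get_version polling loop stands in for the suite's waitforlisten helper and is an illustrative assumption, not the exact helper used here.

# launch nvmf_tgt with 1024 MB of plain (non-hugepage) memory on core mask 0x78, as traced above
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &

# block until the default RPC socket answers before sending configuration RPCs
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
  -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
  sleep 1
done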
00:22:39.740 [2024-11-19 21:12:13.419049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:39.740 [2024-11-19 21:12:13.419116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:39.740 [2024-11-19 21:12:13.419171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.740 [2024-11-19 21:12:13.419178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:40.306 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.306 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:40.307 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:40.307 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.307 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.565 [2024-11-19 21:12:14.108232] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.565 Malloc0 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.565 [2024-11-19 21:12:14.197433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.565 { 00:22:40.565 "params": { 00:22:40.565 "name": "Nvme$subsystem", 00:22:40.565 "trtype": "$TEST_TRANSPORT", 00:22:40.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.565 "adrfam": "ipv4", 00:22:40.565 "trsvcid": "$NVMF_PORT", 00:22:40.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.565 "hdgst": ${hdgst:-false}, 00:22:40.565 "ddgst": ${ddgst:-false} 00:22:40.565 }, 00:22:40.565 "method": "bdev_nvme_attach_controller" 00:22:40.565 } 00:22:40.565 EOF 00:22:40.565 )") 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:40.565 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:40.565 "params": { 00:22:40.565 "name": "Nvme1", 00:22:40.565 "trtype": "tcp", 00:22:40.565 "traddr": "10.0.0.2", 00:22:40.565 "adrfam": "ipv4", 00:22:40.565 "trsvcid": "4420", 00:22:40.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.565 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:40.565 "hdgst": false, 00:22:40.565 "ddgst": false 00:22:40.565 }, 00:22:40.565 "method": "bdev_nvme_attach_controller" 00:22:40.565 }' 00:22:40.565 [2024-11-19 21:12:14.281866] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:22:40.565 [2024-11-19 21:12:14.282013] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3022018 ] 00:22:40.823 [2024-11-19 21:12:14.446608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:40.823 [2024-11-19 21:12:14.587847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.823 [2024-11-19 21:12:14.587890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.823 [2024-11-19 21:12:14.587899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.388 I/O targets: 00:22:41.388 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:41.388 00:22:41.388 00:22:41.388 CUnit - A unit testing framework for C - Version 2.1-3 00:22:41.388 http://cunit.sourceforge.net/ 00:22:41.388 00:22:41.388 00:22:41.388 Suite: bdevio tests on: Nvme1n1 00:22:41.646 Test: blockdev write read block ...passed 00:22:41.646 Test: blockdev write zeroes read block ...passed 00:22:41.646 Test: blockdev write zeroes read no split ...passed 00:22:41.646 Test: blockdev write zeroes read split ...passed 00:22:41.646 Test: blockdev write zeroes read split partial ...passed 00:22:41.646 Test: blockdev reset ...[2024-11-19 21:12:15.401240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:41.646 [2024-11-19 21:12:15.401445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f1100 (9): Bad file descriptor 00:22:41.646 [2024-11-19 21:12:15.431905] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:41.646 passed 00:22:41.903 Test: blockdev write read 8 blocks ...passed 00:22:41.903 Test: blockdev write read size > 128k ...passed 00:22:41.903 Test: blockdev write read invalid size ...passed 00:22:41.903 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:41.903 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:41.903 Test: blockdev write read max offset ...passed 00:22:41.903 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:41.903 Test: blockdev writev readv 8 blocks ...passed 00:22:41.903 Test: blockdev writev readv 30 x 1block ...passed 00:22:41.903 Test: blockdev writev readv block ...passed 00:22:41.903 Test: blockdev writev readv size > 128k ...passed 00:22:41.903 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:41.903 Test: blockdev comparev and writev ...[2024-11-19 21:12:15.693347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.903 [2024-11-19 21:12:15.693426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.903 [2024-11-19 21:12:15.693467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.903 [2024-11-19 21:12:15.693495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:41.903 [2024-11-19 21:12:15.693952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.903 [2024-11-19 21:12:15.693988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:41.903 [2024-11-19 21:12:15.694024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.903 [2024-11-19 21:12:15.694050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:41.903 [2024-11-19 21:12:15.694485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.903 [2024-11-19 21:12:15.694525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:41.903 [2024-11-19 21:12:15.694561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.903 [2024-11-19 21:12:15.694587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:41.903 [2024-11-19 21:12:15.695047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.903 [2024-11-19 21:12:15.695088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:41.903 [2024-11-19 21:12:15.695130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.903 [2024-11-19 21:12:15.695156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:42.161 passed 00:22:42.161 Test: blockdev nvme passthru rw ...passed 00:22:42.161 Test: blockdev nvme passthru vendor specific ...[2024-11-19 21:12:15.778484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:42.161 [2024-11-19 21:12:15.778543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:42.161 [2024-11-19 21:12:15.778782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:42.161 [2024-11-19 21:12:15.778816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:42.161 [2024-11-19 21:12:15.779009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:42.161 [2024-11-19 21:12:15.779041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:42.161 [2024-11-19 21:12:15.779245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:42.161 [2024-11-19 21:12:15.779278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:42.161 passed 00:22:42.161 Test: blockdev nvme admin passthru ...passed 00:22:42.161 Test: blockdev copy ...passed 00:22:42.161 00:22:42.161 Run Summary: Type Total Ran Passed Failed Inactive 00:22:42.161 suites 1 1 n/a 0 0 00:22:42.161 tests 23 23 23 0 0 00:22:42.161 asserts 152 152 152 0 n/a 00:22:42.161 00:22:42.161 Elapsed time = 1.342 seconds 00:22:42.727 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:42.727 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.727 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:42.985 rmmod nvme_tcp 00:22:42.985 rmmod nvme_fabrics 00:22:42.985 rmmod nvme_keyring 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3021862 ']' 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3021862 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3021862 ']' 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3021862 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3021862 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:42.985 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3021862' 00:22:42.985 killing process with pid 3021862 00:22:42.986 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3021862 00:22:42.986 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3021862 00:22:43.920 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:43.920 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:43.920 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:43.920 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:43.920 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:43.920 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:43.920 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:43.920 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:43.920 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:43.920 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.920 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.920 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.820 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:45.820 00:22:45.820 real 0m8.830s 00:22:45.820 user 0m20.695s 00:22:45.820 sys 0m2.926s 00:22:45.820 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.820 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:45.820 ************************************ 00:22:45.820 END TEST nvmf_bdevio_no_huge 00:22:45.820 ************************************ 00:22:45.820 21:12:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:45.820 21:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:45.820 21:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:45.820 21:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:45.820 ************************************ 00:22:45.820 START TEST nvmf_tls 00:22:45.820 ************************************ 00:22:45.821 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:45.821 * Looking for test storage... 00:22:45.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:45.821 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:45.821 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:22:45.821 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:46.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.080 --rc genhtml_branch_coverage=1 00:22:46.080 --rc genhtml_function_coverage=1 00:22:46.080 --rc genhtml_legend=1 00:22:46.080 --rc geninfo_all_blocks=1 00:22:46.080 --rc geninfo_unexecuted_blocks=1 00:22:46.080 00:22:46.080 ' 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:46.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.080 --rc genhtml_branch_coverage=1 00:22:46.080 --rc genhtml_function_coverage=1 00:22:46.080 --rc genhtml_legend=1 00:22:46.080 --rc geninfo_all_blocks=1 00:22:46.080 --rc geninfo_unexecuted_blocks=1 00:22:46.080 00:22:46.080 ' 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:46.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.080 --rc genhtml_branch_coverage=1 00:22:46.080 --rc genhtml_function_coverage=1 00:22:46.080 --rc genhtml_legend=1 00:22:46.080 --rc geninfo_all_blocks=1 00:22:46.080 --rc geninfo_unexecuted_blocks=1 00:22:46.080 00:22:46.080 ' 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:46.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.080 --rc genhtml_branch_coverage=1 00:22:46.080 --rc genhtml_function_coverage=1 00:22:46.080 --rc genhtml_legend=1 00:22:46.080 --rc geninfo_all_blocks=1 00:22:46.080 --rc geninfo_unexecuted_blocks=1 00:22:46.080 00:22:46.080 ' 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.080 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:46.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:46.081 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.982 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:47.982 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:47.982 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:47.982 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:47.982 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:47.982 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:47.982 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:47.982 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:47.982 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:47.982 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:47.982 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:47.982 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:47.982 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:47.982 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:47.983 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:47.983 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:47.983 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:47.983 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:47.983 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:47.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:22:47.983 00:22:47.983 --- 10.0.0.2 ping statistics --- 00:22:47.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.984 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:47.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:22:47.984 00:22:47.984 --- 10.0.0.1 ping statistics --- 00:22:47.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.984 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3024350 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3024350 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3024350 ']' 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:47.984 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.242 [2024-11-19 21:12:21.804307] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:22:48.242 [2024-11-19 21:12:21.804476] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.242 [2024-11-19 21:12:21.948174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.501 [2024-11-19 21:12:22.080563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.501 [2024-11-19 21:12:22.080653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.501 [2024-11-19 21:12:22.080679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.501 [2024-11-19 21:12:22.080704] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.501 [2024-11-19 21:12:22.080724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.501 [2024-11-19 21:12:22.082383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.066 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.066 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:49.066 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:49.067 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:49.067 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.067 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.067 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:49.067 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:49.633 true 00:22:49.633 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:49.633 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:49.633 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:49.633 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:49.633 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:50.198 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:50.198 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:50.198 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:50.198 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:50.198 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:50.456 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:50.456 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:50.714 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:50.714 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:50.714 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:50.714 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:51.280 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:51.280 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:51.281 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:51.281 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:51.281 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:51.539 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:51.539 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:51.539 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:51.796 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:51.796 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.yPzNOzrgnR 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.JQITxZD7mL 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.yPzNOzrgnR 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.JQITxZD7mL 00:22:52.363 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:52.621 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:53.187 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.yPzNOzrgnR 00:22:53.187 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yPzNOzrgnR 00:22:53.187 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:53.445 [2024-11-19 21:12:27.099733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.445 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:53.703 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:53.961 [2024-11-19 21:12:27.653320] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:53.961 [2024-11-19 21:12:27.653722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.961 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:54.218 malloc0 00:22:54.218 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:54.478 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yPzNOzrgnR 00:22:54.772 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:55.030 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.yPzNOzrgnR 00:23:07.244 Initializing NVMe Controllers 00:23:07.244 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:07.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:07.244 Initialization complete. Launching workers. 00:23:07.244 ======================================================== 00:23:07.244 Latency(us) 00:23:07.244 Device Information : IOPS MiB/s Average min max 00:23:07.244 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5814.39 22.71 11012.04 2090.41 13171.09 00:23:07.244 ======================================================== 00:23:07.244 Total : 5814.39 22.71 11012.04 2090.41 13171.09 00:23:07.244 00:23:07.244 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yPzNOzrgnR 00:23:07.244 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:07.244 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:07.244 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:07.244 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yPzNOzrgnR 00:23:07.244 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.244 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3026384 00:23:07.244 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.244 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.244 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3026384 /var/tmp/bdevperf.sock 00:23:07.244 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3026384 ']' 00:23:07.244 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.244 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.244 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:07.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.244 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.244 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.244 [2024-11-19 21:12:39.116492] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:23:07.244 [2024-11-19 21:12:39.116625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3026384 ] 00:23:07.244 [2024-11-19 21:12:39.253810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.244 [2024-11-19 21:12:39.372670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.244 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.244 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:07.244 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yPzNOzrgnR 00:23:07.244 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:07.244 [2024-11-19 21:12:40.630833] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.244 TLSTESTn1 00:23:07.245 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:07.245 Running I/O for 10 seconds... 
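Every bdevperf-based case in this log, the passing run above and the failing ones that follow, drives the initiator the same way. A condensed sketch of that client-side flow, taken from the commands visible in the trace (paths, port and NQNs are the ones used in this run; waitforlisten is the autotest_common.sh helper that polls the app's RPC socket):

  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  waitforlisten $! /var/tmp/bdevperf.sock
  # register the PSK with the initiator app's keyring, then attempt the TLS attach
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yPzNOzrgnR
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  # on success the attach creates TLSTESTn1 and bdevperf.py runs the actual I/O
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests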
00:23:09.120 2062.00 IOPS, 8.05 MiB/s [2024-11-19T20:12:43.856Z] 2191.00 IOPS, 8.56 MiB/s [2024-11-19T20:12:45.237Z] 2272.33 IOPS, 8.88 MiB/s [2024-11-19T20:12:46.175Z] 2326.50 IOPS, 9.09 MiB/s [2024-11-19T20:12:47.113Z] 2338.20 IOPS, 9.13 MiB/s [2024-11-19T20:12:48.053Z] 2361.33 IOPS, 9.22 MiB/s [2024-11-19T20:12:48.991Z] 2363.86 IOPS, 9.23 MiB/s [2024-11-19T20:12:49.927Z] 2379.12 IOPS, 9.29 MiB/s [2024-11-19T20:12:50.866Z] 2391.89 IOPS, 9.34 MiB/s [2024-11-19T20:12:51.126Z] 2378.30 IOPS, 9.29 MiB/s 00:23:17.331 Latency(us) 00:23:17.331 [2024-11-19T20:12:51.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.332 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:17.332 Verification LBA range: start 0x0 length 0x2000 00:23:17.332 TLSTESTn1 : 10.04 2381.19 9.30 0.00 0.00 53628.14 11359.57 80002.47 00:23:17.332 [2024-11-19T20:12:51.127Z] =================================================================================================================== 00:23:17.332 [2024-11-19T20:12:51.127Z] Total : 2381.19 9.30 0.00 0.00 53628.14 11359.57 80002.47 00:23:17.332 { 00:23:17.332 "results": [ 00:23:17.332 { 00:23:17.332 "job": "TLSTESTn1", 00:23:17.332 "core_mask": "0x4", 00:23:17.332 "workload": "verify", 00:23:17.332 "status": "finished", 00:23:17.332 "verify_range": { 00:23:17.332 "start": 0, 00:23:17.332 "length": 8192 00:23:17.332 }, 00:23:17.332 "queue_depth": 128, 00:23:17.332 "io_size": 4096, 00:23:17.332 "runtime": 10.040789, 00:23:17.332 "iops": 2381.1873748168596, 00:23:17.332 "mibps": 9.301513182878358, 00:23:17.332 "io_failed": 0, 00:23:17.332 "io_timeout": 0, 00:23:17.332 "avg_latency_us": 53628.14160134956, 00:23:17.332 "min_latency_us": 11359.573333333334, 00:23:17.332 "max_latency_us": 80002.46518518518 00:23:17.332 } 00:23:17.332 ], 00:23:17.332 "core_count": 1 00:23:17.332 } 00:23:17.332 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:17.332 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3026384 00:23:17.332 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3026384 ']' 00:23:17.332 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3026384 00:23:17.332 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:17.332 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.332 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3026384 00:23:17.332 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:17.332 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:17.332 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3026384' 00:23:17.332 killing process with pid 3026384 00:23:17.332 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3026384 00:23:17.332 Received shutdown signal, test time was about 10.000000 seconds 00:23:17.332 00:23:17.332 Latency(us) 00:23:17.332 [2024-11-19T20:12:51.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.332 [2024-11-19T20:12:51.127Z] 
=================================================================================================================== 00:23:17.332 [2024-11-19T20:12:51.127Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:17.332 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3026384 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JQITxZD7mL 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JQITxZD7mL 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JQITxZD7mL 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JQITxZD7mL 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3027851 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3027851 /var/tmp/bdevperf.sock 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3027851 ']' 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.272 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.272 [2024-11-19 21:12:51.867705] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:23:18.272 [2024-11-19 21:12:51.867856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027851 ] 00:23:18.272 [2024-11-19 21:12:52.000346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.532 [2024-11-19 21:12:52.119983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.099 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.099 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:19.099 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JQITxZD7mL 00:23:19.357 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:19.615 [2024-11-19 21:12:53.338321] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.615 [2024-11-19 21:12:53.348392] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:19.615 [2024-11-19 21:12:53.349170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:19.615 [2024-11-19 21:12:53.350146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:19.615 [2024-11-19 21:12:53.351138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:19.615 [2024-11-19 21:12:53.351180] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:19.615 [2024-11-19 21:12:53.351204] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:19.615 [2024-11-19 21:12:53.351235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
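The attach above is expected to fail: the initiator side loads the second key (/tmp/tmp.JQITxZD7mL), which does not match the PSK the target holds for host1. That is why target/tls.sh wraps the whole run in the NOT / valid_exec_arg helpers from autotest_common.sh, visible as the es=1 bookkeeping in the trace. A simplified stand-in for that inversion logic, not the literal helper:

  NOT() {
      # run the wrapped command and succeed only if it failed
      local es=0
      "$@" || es=$?
      (( es != 0 ))
  }
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JQITxZD7mL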
00:23:19.615 request: 00:23:19.615 { 00:23:19.615 "name": "TLSTEST", 00:23:19.615 "trtype": "tcp", 00:23:19.615 "traddr": "10.0.0.2", 00:23:19.615 "adrfam": "ipv4", 00:23:19.615 "trsvcid": "4420", 00:23:19.615 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.615 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:19.615 "prchk_reftag": false, 00:23:19.615 "prchk_guard": false, 00:23:19.615 "hdgst": false, 00:23:19.615 "ddgst": false, 00:23:19.615 "psk": "key0", 00:23:19.615 "allow_unrecognized_csi": false, 00:23:19.615 "method": "bdev_nvme_attach_controller", 00:23:19.615 "req_id": 1 00:23:19.615 } 00:23:19.615 Got JSON-RPC error response 00:23:19.615 response: 00:23:19.615 { 00:23:19.615 "code": -5, 00:23:19.615 "message": "Input/output error" 00:23:19.615 } 00:23:19.615 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3027851 00:23:19.615 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3027851 ']' 00:23:19.615 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3027851 00:23:19.615 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:19.615 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.615 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3027851 00:23:19.874 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:19.874 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:19.874 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3027851' 00:23:19.874 killing process with pid 3027851 00:23:19.874 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3027851 00:23:19.874 Received shutdown signal, test time was about 10.000000 seconds 00:23:19.874 00:23:19.874 Latency(us) 00:23:19.874 [2024-11-19T20:12:53.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.874 [2024-11-19T20:12:53.669Z] =================================================================================================================== 00:23:19.874 [2024-11-19T20:12:53.669Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:19.874 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3027851 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yPzNOzrgnR 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.yPzNOzrgnR 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yPzNOzrgnR 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yPzNOzrgnR 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3028123 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3028123 /var/tmp/bdevperf.sock 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3028123 ']' 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.443 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.703 [2024-11-19 21:12:54.283564] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:23:20.703 [2024-11-19 21:12:54.283711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3028123 ] 00:23:20.703 [2024-11-19 21:12:54.417408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.963 [2024-11-19 21:12:54.543418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.530 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.530 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:21.530 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yPzNOzrgnR 00:23:22.097 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:22.356 [2024-11-19 21:12:55.899280] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:22.356 [2024-11-19 21:12:55.913331] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:22.356 [2024-11-19 21:12:55.913374] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:22.356 [2024-11-19 21:12:55.913457] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:22.356 [2024-11-19 21:12:55.914039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:22.356 [2024-11-19 21:12:55.915022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:22.356 [2024-11-19 21:12:55.916011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:22.356 [2024-11-19 21:12:55.916063] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:22.356 [2024-11-19 21:12:55.916110] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:22.356 [2024-11-19 21:12:55.916139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
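The telling line in the host2 case is the target-side lookup error just above: "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1". The target resolves the TLS PSK from an identity built out of the connecting host NQN and the subsystem NQN, and key0 was only registered for host1. Purely for illustration (the test at target/tls.sh@150 deliberately omits this so the handshake must fail), letting host2 connect would have required its own registration on the target:

  # hypothetical, not part of the test flow traced above
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0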
00:23:22.356 request: 00:23:22.356 { 00:23:22.356 "name": "TLSTEST", 00:23:22.356 "trtype": "tcp", 00:23:22.356 "traddr": "10.0.0.2", 00:23:22.356 "adrfam": "ipv4", 00:23:22.356 "trsvcid": "4420", 00:23:22.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.356 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:22.356 "prchk_reftag": false, 00:23:22.356 "prchk_guard": false, 00:23:22.356 "hdgst": false, 00:23:22.356 "ddgst": false, 00:23:22.356 "psk": "key0", 00:23:22.356 "allow_unrecognized_csi": false, 00:23:22.356 "method": "bdev_nvme_attach_controller", 00:23:22.356 "req_id": 1 00:23:22.356 } 00:23:22.356 Got JSON-RPC error response 00:23:22.356 response: 00:23:22.356 { 00:23:22.356 "code": -5, 00:23:22.356 "message": "Input/output error" 00:23:22.356 } 00:23:22.356 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3028123 00:23:22.356 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3028123 ']' 00:23:22.356 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3028123 00:23:22.356 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:22.356 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.356 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3028123 00:23:22.356 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:22.356 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:22.356 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3028123' 00:23:22.356 killing process with pid 3028123 00:23:22.356 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3028123 00:23:22.356 Received shutdown signal, test time was about 10.000000 seconds 00:23:22.356 00:23:22.356 Latency(us) 00:23:22.356 [2024-11-19T20:12:56.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.356 [2024-11-19T20:12:56.151Z] =================================================================================================================== 00:23:22.356 [2024-11-19T20:12:56.151Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:22.356 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3028123 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yPzNOzrgnR 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.yPzNOzrgnR 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yPzNOzrgnR 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yPzNOzrgnR 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3028405 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3028405 /var/tmp/bdevperf.sock 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3028405 ']' 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.290 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:23.291 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.291 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.291 [2024-11-19 21:12:56.820786] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:23:23.291 [2024-11-19 21:12:56.820924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3028405 ] 00:23:23.291 [2024-11-19 21:12:56.958096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.291 [2024-11-19 21:12:57.081066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.227 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.227 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:24.227 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yPzNOzrgnR 00:23:24.485 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:24.745 [2024-11-19 21:12:58.414220] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:24.745 [2024-11-19 21:12:58.423795] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:24.745 [2024-11-19 21:12:58.423837] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:24.745 [2024-11-19 21:12:58.423893] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:24.745 [2024-11-19 21:12:58.423955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:24.745 [2024-11-19 21:12:58.424927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:24.745 [2024-11-19 21:12:58.425925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:24.745 [2024-11-19 21:12:58.425956] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:24.745 [2024-11-19 21:12:58.425982] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:24.745 [2024-11-19 21:12:58.426008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
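The cnode2 variant that follows fails the same way, this time because the identity "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" has no PSK registered. When triaging failures like these two, the keys each application actually holds can be listed over the same RPC sockets; a sketch, assuming the keyring_get_keys RPC is available in this SPDK tree:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc keyring_get_keys                              # target-side application keyring
  $rpc -s /var/tmp/bdevperf.sock keyring_get_keys    # initiator (bdevperf) keyring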
00:23:24.745 request: 00:23:24.745 { 00:23:24.745 "name": "TLSTEST", 00:23:24.745 "trtype": "tcp", 00:23:24.745 "traddr": "10.0.0.2", 00:23:24.745 "adrfam": "ipv4", 00:23:24.745 "trsvcid": "4420", 00:23:24.745 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:24.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:24.745 "prchk_reftag": false, 00:23:24.745 "prchk_guard": false, 00:23:24.745 "hdgst": false, 00:23:24.745 "ddgst": false, 00:23:24.745 "psk": "key0", 00:23:24.745 "allow_unrecognized_csi": false, 00:23:24.745 "method": "bdev_nvme_attach_controller", 00:23:24.745 "req_id": 1 00:23:24.745 } 00:23:24.745 Got JSON-RPC error response 00:23:24.745 response: 00:23:24.745 { 00:23:24.745 "code": -5, 00:23:24.745 "message": "Input/output error" 00:23:24.745 } 00:23:24.745 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3028405 00:23:24.745 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3028405 ']' 00:23:24.745 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3028405 00:23:24.745 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:24.745 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.745 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3028405 00:23:24.745 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:24.745 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:24.745 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3028405' 00:23:24.745 killing process with pid 3028405 00:23:24.745 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3028405 00:23:24.745 Received shutdown signal, test time was about 10.000000 seconds 00:23:24.745 00:23:24.745 Latency(us) 00:23:24.745 [2024-11-19T20:12:58.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.745 [2024-11-19T20:12:58.540Z] =================================================================================================================== 00:23:24.745 [2024-11-19T20:12:58.540Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:24.745 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3028405 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:25.684 
21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3028682 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3028682 /var/tmp/bdevperf.sock 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3028682 ']' 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.684 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.684 [2024-11-19 21:12:59.365466] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:23:25.684 [2024-11-19 21:12:59.365609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3028682 ] 00:23:25.943 [2024-11-19 21:12:59.502801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.943 [2024-11-19 21:12:59.627531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.880 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.880 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:26.880 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:26.880 [2024-11-19 21:13:00.638039] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:26.880 [2024-11-19 21:13:00.638140] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:26.880 request: 00:23:26.880 { 00:23:26.880 "name": "key0", 00:23:26.880 "path": "", 00:23:26.880 "method": "keyring_file_add_key", 00:23:26.880 "req_id": 1 00:23:26.880 } 00:23:26.880 Got JSON-RPC error response 00:23:26.880 response: 00:23:26.880 { 00:23:26.880 "code": -1, 00:23:26.880 "message": "Operation not permitted" 00:23:26.880 } 00:23:26.880 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:27.447 [2024-11-19 21:13:00.959007] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.447 [2024-11-19 21:13:00.959097] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:27.447 request: 00:23:27.447 { 00:23:27.447 "name": "TLSTEST", 00:23:27.447 "trtype": "tcp", 00:23:27.447 "traddr": "10.0.0.2", 00:23:27.447 "adrfam": "ipv4", 00:23:27.447 "trsvcid": "4420", 00:23:27.447 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.447 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.447 "prchk_reftag": false, 00:23:27.447 "prchk_guard": false, 00:23:27.447 "hdgst": false, 00:23:27.447 "ddgst": false, 00:23:27.447 "psk": "key0", 00:23:27.447 "allow_unrecognized_csi": false, 00:23:27.447 "method": "bdev_nvme_attach_controller", 00:23:27.447 "req_id": 1 00:23:27.447 } 00:23:27.447 Got JSON-RPC error response 00:23:27.447 response: 00:23:27.447 { 00:23:27.447 "code": -126, 00:23:27.447 "message": "Required key not available" 00:23:27.447 } 00:23:27.447 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3028682 00:23:27.447 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3028682 ']' 00:23:27.447 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3028682 00:23:27.447 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:27.447 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.447 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3028682 00:23:27.447 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:27.447 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:27.447 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3028682' 00:23:27.447 killing process with pid 3028682 00:23:27.447 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3028682 00:23:27.447 Received shutdown signal, test time was about 10.000000 seconds 00:23:27.447 00:23:27.447 Latency(us) 00:23:27.447 [2024-11-19T20:13:01.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.447 [2024-11-19T20:13:01.242Z] =================================================================================================================== 00:23:27.447 [2024-11-19T20:13:01.242Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:27.448 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3028682 00:23:28.387 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:28.387 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:28.387 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:28.387 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:28.387 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:28.387 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3024350 00:23:28.387 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3024350 ']' 00:23:28.387 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3024350 00:23:28.387 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:28.387 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.387 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3024350 00:23:28.387 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:28.387 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:28.387 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3024350' 00:23:28.387 killing process with pid 3024350 00:23:28.387 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3024350 00:23:28.387 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3024350 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:29.766 21:13:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.cxjXfAwzeZ 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.cxjXfAwzeZ 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3029225 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3029225 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3029225 ']' 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.766 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.766 [2024-11-19 21:13:03.267852] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:23:29.766 [2024-11-19 21:13:03.268005] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.766 [2024-11-19 21:13:03.419957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.766 [2024-11-19 21:13:03.555157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.766 [2024-11-19 21:13:03.555255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:29.766 [2024-11-19 21:13:03.555281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.766 [2024-11-19 21:13:03.555305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.766 [2024-11-19 21:13:03.555333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:29.766 [2024-11-19 21:13:03.557044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.708 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.708 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:30.708 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:30.708 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:30.708 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.708 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.708 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.cxjXfAwzeZ 00:23:30.708 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cxjXfAwzeZ 00:23:30.708 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:31.027 [2024-11-19 21:13:04.536331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.027 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:31.287 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:31.287 [2024-11-19 21:13:05.077964] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:31.287 [2024-11-19 21:13:05.078410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.546 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:31.804 malloc0 00:23:31.804 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:32.062 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cxjXfAwzeZ 00:23:32.320 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:32.578 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cxjXfAwzeZ 00:23:32.578 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:32.578 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:32.578 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:32.578 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cxjXfAwzeZ 00:23:32.578 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:32.578 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3029635 00:23:32.578 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:32.578 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:32.578 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3029635 /var/tmp/bdevperf.sock 00:23:32.578 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3029635 ']' 00:23:32.578 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.578 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.578 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.578 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.578 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.838 [2024-11-19 21:13:06.376634] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:23:32.838 [2024-11-19 21:13:06.376777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3029635 ] 00:23:32.838 [2024-11-19 21:13:06.511322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.838 [2024-11-19 21:13:06.629811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.775 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.775 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:33.775 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cxjXfAwzeZ 00:23:34.033 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:34.292 [2024-11-19 21:13:07.873841] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.292 TLSTESTn1 00:23:34.292 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:34.551 Running I/O for 10 seconds... 00:23:36.425 2638.00 IOPS, 10.30 MiB/s [2024-11-19T20:13:11.156Z] 2659.50 IOPS, 10.39 MiB/s [2024-11-19T20:13:12.539Z] 2669.67 IOPS, 10.43 MiB/s [2024-11-19T20:13:13.478Z] 2676.50 IOPS, 10.46 MiB/s [2024-11-19T20:13:14.416Z] 2666.40 IOPS, 10.42 MiB/s [2024-11-19T20:13:15.355Z] 2675.17 IOPS, 10.45 MiB/s [2024-11-19T20:13:16.297Z] 2668.43 IOPS, 10.42 MiB/s [2024-11-19T20:13:17.236Z] 2668.88 IOPS, 10.43 MiB/s [2024-11-19T20:13:18.174Z] 2669.11 IOPS, 10.43 MiB/s [2024-11-19T20:13:18.174Z] 2673.40 IOPS, 10.44 MiB/s 00:23:44.379 Latency(us) 00:23:44.379 [2024-11-19T20:13:18.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.379 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:44.379 Verification LBA range: start 0x0 length 0x2000 00:23:44.379 TLSTESTn1 : 10.03 2678.98 10.46 0.00 0.00 47687.96 9417.77 40777.96 00:23:44.379 [2024-11-19T20:13:18.174Z] =================================================================================================================== 00:23:44.379 [2024-11-19T20:13:18.174Z] Total : 2678.98 10.46 0.00 0.00 47687.96 9417.77 40777.96 00:23:44.379 { 00:23:44.379 "results": [ 00:23:44.379 { 00:23:44.379 "job": "TLSTESTn1", 00:23:44.379 "core_mask": "0x4", 00:23:44.379 "workload": "verify", 00:23:44.379 "status": "finished", 00:23:44.379 "verify_range": { 00:23:44.379 "start": 0, 00:23:44.379 "length": 8192 00:23:44.379 }, 00:23:44.379 "queue_depth": 128, 00:23:44.379 "io_size": 4096, 00:23:44.379 "runtime": 10.026207, 00:23:44.379 "iops": 2678.9791992126234, 00:23:44.379 "mibps": 10.46476249692431, 00:23:44.379 "io_failed": 0, 00:23:44.379 "io_timeout": 0, 00:23:44.379 "avg_latency_us": 47687.955111221425, 00:23:44.379 "min_latency_us": 9417.765925925925, 00:23:44.379 "max_latency_us": 40777.955555555556 00:23:44.379 } 00:23:44.379 ], 00:23:44.379 
"core_count": 1 00:23:44.379 } 00:23:44.379 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:44.379 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3029635 00:23:44.379 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3029635 ']' 00:23:44.379 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3029635 00:23:44.379 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:44.637 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.637 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3029635 00:23:44.637 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:44.637 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:44.637 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3029635' 00:23:44.637 killing process with pid 3029635 00:23:44.637 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3029635 00:23:44.637 Received shutdown signal, test time was about 10.000000 seconds 00:23:44.637 00:23:44.637 Latency(us) 00:23:44.637 [2024-11-19T20:13:18.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.637 [2024-11-19T20:13:18.432Z] =================================================================================================================== 00:23:44.637 [2024-11-19T20:13:18.432Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:44.637 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3029635 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.cxjXfAwzeZ 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cxjXfAwzeZ 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cxjXfAwzeZ 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cxjXfAwzeZ 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cxjXfAwzeZ 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3031100 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3031100 /var/tmp/bdevperf.sock 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3031100 ']' 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:45.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.577 21:13:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.577 [2024-11-19 21:13:19.094358] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:23:45.577 [2024-11-19 21:13:19.094512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3031100 ] 00:23:45.577 [2024-11-19 21:13:19.231732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.577 [2024-11-19 21:13:19.351097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.512 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.512 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:46.512 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cxjXfAwzeZ 00:23:46.770 [2024-11-19 21:13:20.385385] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.cxjXfAwzeZ': 0100666 00:23:46.770 [2024-11-19 21:13:20.385455] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:46.770 request: 00:23:46.770 { 00:23:46.770 "name": "key0", 00:23:46.770 "path": "/tmp/tmp.cxjXfAwzeZ", 00:23:46.770 "method": "keyring_file_add_key", 00:23:46.770 "req_id": 1 00:23:46.770 } 00:23:46.770 Got JSON-RPC error response 00:23:46.770 response: 00:23:46.770 { 00:23:46.770 "code": -1, 00:23:46.770 "message": "Operation not permitted" 00:23:46.770 } 00:23:46.770 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:47.030 [2024-11-19 21:13:20.714398] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:47.030 [2024-11-19 21:13:20.714457] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:47.030 request: 00:23:47.030 { 00:23:47.030 "name": "TLSTEST", 00:23:47.030 "trtype": "tcp", 00:23:47.030 "traddr": "10.0.0.2", 00:23:47.030 "adrfam": "ipv4", 00:23:47.030 "trsvcid": "4420", 00:23:47.030 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.030 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:47.030 "prchk_reftag": false, 00:23:47.030 "prchk_guard": false, 00:23:47.030 "hdgst": false, 00:23:47.030 "ddgst": false, 00:23:47.030 "psk": "key0", 00:23:47.030 "allow_unrecognized_csi": false, 00:23:47.030 "method": "bdev_nvme_attach_controller", 00:23:47.030 "req_id": 1 00:23:47.030 } 00:23:47.030 Got JSON-RPC error response 00:23:47.030 response: 00:23:47.030 { 00:23:47.030 "code": -126, 00:23:47.030 "message": "Required key not available" 00:23:47.030 } 00:23:47.030 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3031100 00:23:47.030 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3031100 ']' 00:23:47.030 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3031100 00:23:47.030 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:47.030 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.030 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3031100 00:23:47.030 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:47.030 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:47.030 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3031100' 00:23:47.030 killing process with pid 3031100 00:23:47.030 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3031100 00:23:47.030 Received shutdown signal, test time was about 10.000000 seconds 00:23:47.030 00:23:47.030 Latency(us) 00:23:47.030 [2024-11-19T20:13:20.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.030 [2024-11-19T20:13:20.825Z] =================================================================================================================== 00:23:47.030 [2024-11-19T20:13:20.825Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:47.030 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3031100 00:23:47.968 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:47.968 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:47.968 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:47.968 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:47.968 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:47.968 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3029225 00:23:47.968 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3029225 ']' 00:23:47.968 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3029225 00:23:47.968 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:47.968 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.968 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3029225 00:23:47.968 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:47.968 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:47.968 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3029225' 00:23:47.968 killing process with pid 3029225 00:23:47.968 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3029225 00:23:47.968 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3029225 00:23:49.353 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:49.353 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:49.353 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:49.353 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.353 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3031516 00:23:49.353 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:49.353 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3031516 00:23:49.353 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3031516 ']' 00:23:49.353 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.353 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.353 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.353 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.353 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.353 [2024-11-19 21:13:22.890501] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:23:49.353 [2024-11-19 21:13:22.890660] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.353 [2024-11-19 21:13:23.056586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.612 [2024-11-19 21:13:23.192814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.612 [2024-11-19 21:13:23.192917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.612 [2024-11-19 21:13:23.192943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.612 [2024-11-19 21:13:23.192968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.612 [2024-11-19 21:13:23.192989] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:49.612 [2024-11-19 21:13:23.194672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.177 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.177 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:50.177 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:50.177 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:50.177 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.177 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.177 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.cxjXfAwzeZ 00:23:50.177 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:50.177 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.cxjXfAwzeZ 00:23:50.177 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:50.177 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:50.177 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:50.177 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:50.177 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.cxjXfAwzeZ 00:23:50.177 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cxjXfAwzeZ 00:23:50.177 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:50.435 [2024-11-19 21:13:24.136745] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.435 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:50.692 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:50.950 [2024-11-19 21:13:24.670305] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:50.950 [2024-11-19 21:13:24.670697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.950 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:51.208 malloc0 00:23:51.208 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:51.466 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cxjXfAwzeZ 00:23:51.725 [2024-11-19 
21:13:25.494018] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.cxjXfAwzeZ': 0100666 00:23:51.725 [2024-11-19 21:13:25.494115] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:51.725 request: 00:23:51.725 { 00:23:51.725 "name": "key0", 00:23:51.725 "path": "/tmp/tmp.cxjXfAwzeZ", 00:23:51.725 "method": "keyring_file_add_key", 00:23:51.725 "req_id": 1 00:23:51.725 } 00:23:51.725 Got JSON-RPC error response 00:23:51.725 response: 00:23:51.725 { 00:23:51.725 "code": -1, 00:23:51.725 "message": "Operation not permitted" 00:23:51.725 } 00:23:51.725 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:52.293 [2024-11-19 21:13:25.782877] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:52.293 [2024-11-19 21:13:25.782962] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:52.293 request: 00:23:52.293 { 00:23:52.293 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.293 "host": "nqn.2016-06.io.spdk:host1", 00:23:52.293 "psk": "key0", 00:23:52.293 "method": "nvmf_subsystem_add_host", 00:23:52.293 "req_id": 1 00:23:52.293 } 00:23:52.293 Got JSON-RPC error response 00:23:52.293 response: 00:23:52.293 { 00:23:52.293 "code": -32603, 00:23:52.293 "message": "Internal error" 00:23:52.293 } 00:23:52.293 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:52.293 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:52.293 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:52.293 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:52.293 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3031516 00:23:52.293 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3031516 ']' 00:23:52.293 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3031516 00:23:52.293 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:52.293 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.293 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3031516 00:23:52.293 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:52.293 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:52.293 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3031516' 00:23:52.293 killing process with pid 3031516 00:23:52.293 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3031516 00:23:52.293 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3031516 00:23:53.671 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.cxjXfAwzeZ 00:23:53.671 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:53.671 21:13:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:53.671 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.671 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.671 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3032073 00:23:53.671 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:53.671 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3032073 00:23:53.671 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3032073 ']' 00:23:53.671 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.671 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.671 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.671 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.671 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.671 [2024-11-19 21:13:27.179346] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:23:53.671 [2024-11-19 21:13:27.179506] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.671 [2024-11-19 21:13:27.322826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.671 [2024-11-19 21:13:27.452177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.671 [2024-11-19 21:13:27.452266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.671 [2024-11-19 21:13:27.452292] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.671 [2024-11-19 21:13:27.452318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.671 [2024-11-19 21:13:27.452338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:53.671 [2024-11-19 21:13:27.453967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.605 21:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.605 21:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:54.605 21:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:54.605 21:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:54.605 21:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.605 21:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.605 21:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.cxjXfAwzeZ 00:23:54.605 21:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cxjXfAwzeZ 00:23:54.605 21:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:54.863 [2024-11-19 21:13:28.465207] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.863 21:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:55.121 21:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:55.379 [2024-11-19 21:13:29.094954] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:55.379 [2024-11-19 21:13:29.095345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.379 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:55.638 malloc0 00:23:55.638 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:56.203 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cxjXfAwzeZ 00:23:56.460 21:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:56.720 21:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3032490 00:23:56.720 21:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:56.720 21:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:56.720 21:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3032490 /var/tmp/bdevperf.sock 00:23:56.720 21:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3032490 ']' 00:23:56.720 21:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:56.720 21:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.720 21:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:56.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:56.720 21:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.720 21:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.720 [2024-11-19 21:13:30.441734] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:23:56.720 [2024-11-19 21:13:30.441896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3032490 ] 00:23:57.044 [2024-11-19 21:13:30.579023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.044 [2024-11-19 21:13:30.712474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.980 21:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.980 21:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:57.980 21:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cxjXfAwzeZ 00:23:57.980 21:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:58.240 [2024-11-19 21:13:31.941952] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:58.240 TLSTESTn1 00:23:58.498 21:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:58.757 21:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:58.757 "subsystems": [ 00:23:58.757 { 00:23:58.757 "subsystem": "keyring", 00:23:58.757 "config": [ 00:23:58.757 { 00:23:58.757 "method": "keyring_file_add_key", 00:23:58.757 "params": { 00:23:58.757 "name": "key0", 00:23:58.757 "path": "/tmp/tmp.cxjXfAwzeZ" 00:23:58.757 } 00:23:58.757 } 00:23:58.757 ] 00:23:58.757 }, 00:23:58.757 { 00:23:58.757 "subsystem": "iobuf", 00:23:58.757 "config": [ 00:23:58.757 { 00:23:58.757 "method": "iobuf_set_options", 00:23:58.757 "params": { 00:23:58.757 "small_pool_count": 8192, 00:23:58.757 "large_pool_count": 1024, 00:23:58.757 "small_bufsize": 8192, 00:23:58.757 "large_bufsize": 135168, 00:23:58.757 "enable_numa": false 00:23:58.757 } 00:23:58.757 } 00:23:58.757 ] 00:23:58.757 }, 00:23:58.757 { 00:23:58.757 "subsystem": "sock", 00:23:58.757 "config": [ 00:23:58.757 { 00:23:58.757 "method": "sock_set_default_impl", 00:23:58.757 "params": { 00:23:58.757 "impl_name": "posix" 
00:23:58.757 } 00:23:58.757 }, 00:23:58.757 { 00:23:58.757 "method": "sock_impl_set_options", 00:23:58.757 "params": { 00:23:58.757 "impl_name": "ssl", 00:23:58.757 "recv_buf_size": 4096, 00:23:58.757 "send_buf_size": 4096, 00:23:58.757 "enable_recv_pipe": true, 00:23:58.757 "enable_quickack": false, 00:23:58.757 "enable_placement_id": 0, 00:23:58.757 "enable_zerocopy_send_server": true, 00:23:58.757 "enable_zerocopy_send_client": false, 00:23:58.757 "zerocopy_threshold": 0, 00:23:58.757 "tls_version": 0, 00:23:58.757 "enable_ktls": false 00:23:58.757 } 00:23:58.757 }, 00:23:58.757 { 00:23:58.757 "method": "sock_impl_set_options", 00:23:58.757 "params": { 00:23:58.757 "impl_name": "posix", 00:23:58.757 "recv_buf_size": 2097152, 00:23:58.757 "send_buf_size": 2097152, 00:23:58.757 "enable_recv_pipe": true, 00:23:58.757 "enable_quickack": false, 00:23:58.757 "enable_placement_id": 0, 00:23:58.757 "enable_zerocopy_send_server": true, 00:23:58.757 "enable_zerocopy_send_client": false, 00:23:58.757 "zerocopy_threshold": 0, 00:23:58.757 "tls_version": 0, 00:23:58.757 "enable_ktls": false 00:23:58.757 } 00:23:58.757 } 00:23:58.757 ] 00:23:58.757 }, 00:23:58.757 { 00:23:58.757 "subsystem": "vmd", 00:23:58.757 "config": [] 00:23:58.757 }, 00:23:58.757 { 00:23:58.757 "subsystem": "accel", 00:23:58.757 "config": [ 00:23:58.757 { 00:23:58.757 "method": "accel_set_options", 00:23:58.757 "params": { 00:23:58.757 "small_cache_size": 128, 00:23:58.757 "large_cache_size": 16, 00:23:58.757 "task_count": 2048, 00:23:58.757 "sequence_count": 2048, 00:23:58.757 "buf_count": 2048 00:23:58.757 } 00:23:58.757 } 00:23:58.757 ] 00:23:58.757 }, 00:23:58.757 { 00:23:58.757 "subsystem": "bdev", 00:23:58.757 "config": [ 00:23:58.757 { 00:23:58.757 "method": "bdev_set_options", 00:23:58.757 "params": { 00:23:58.757 "bdev_io_pool_size": 65535, 00:23:58.757 "bdev_io_cache_size": 256, 00:23:58.757 "bdev_auto_examine": true, 00:23:58.757 "iobuf_small_cache_size": 128, 00:23:58.757 "iobuf_large_cache_size": 16 00:23:58.757 } 00:23:58.757 }, 00:23:58.757 { 00:23:58.757 "method": "bdev_raid_set_options", 00:23:58.757 "params": { 00:23:58.757 "process_window_size_kb": 1024, 00:23:58.757 "process_max_bandwidth_mb_sec": 0 00:23:58.757 } 00:23:58.757 }, 00:23:58.757 { 00:23:58.757 "method": "bdev_iscsi_set_options", 00:23:58.757 "params": { 00:23:58.757 "timeout_sec": 30 00:23:58.757 } 00:23:58.757 }, 00:23:58.757 { 00:23:58.757 "method": "bdev_nvme_set_options", 00:23:58.757 "params": { 00:23:58.757 "action_on_timeout": "none", 00:23:58.757 "timeout_us": 0, 00:23:58.757 "timeout_admin_us": 0, 00:23:58.757 "keep_alive_timeout_ms": 10000, 00:23:58.757 "arbitration_burst": 0, 00:23:58.757 "low_priority_weight": 0, 00:23:58.757 "medium_priority_weight": 0, 00:23:58.757 "high_priority_weight": 0, 00:23:58.757 "nvme_adminq_poll_period_us": 10000, 00:23:58.757 "nvme_ioq_poll_period_us": 0, 00:23:58.757 "io_queue_requests": 0, 00:23:58.757 "delay_cmd_submit": true, 00:23:58.757 "transport_retry_count": 4, 00:23:58.757 "bdev_retry_count": 3, 00:23:58.757 "transport_ack_timeout": 0, 00:23:58.757 "ctrlr_loss_timeout_sec": 0, 00:23:58.757 "reconnect_delay_sec": 0, 00:23:58.757 "fast_io_fail_timeout_sec": 0, 00:23:58.757 "disable_auto_failback": false, 00:23:58.757 "generate_uuids": false, 00:23:58.758 "transport_tos": 0, 00:23:58.758 "nvme_error_stat": false, 00:23:58.758 "rdma_srq_size": 0, 00:23:58.758 "io_path_stat": false, 00:23:58.758 "allow_accel_sequence": false, 00:23:58.758 "rdma_max_cq_size": 0, 00:23:58.758 
"rdma_cm_event_timeout_ms": 0, 00:23:58.758 "dhchap_digests": [ 00:23:58.758 "sha256", 00:23:58.758 "sha384", 00:23:58.758 "sha512" 00:23:58.758 ], 00:23:58.758 "dhchap_dhgroups": [ 00:23:58.758 "null", 00:23:58.758 "ffdhe2048", 00:23:58.758 "ffdhe3072", 00:23:58.758 "ffdhe4096", 00:23:58.758 "ffdhe6144", 00:23:58.758 "ffdhe8192" 00:23:58.758 ] 00:23:58.758 } 00:23:58.758 }, 00:23:58.758 { 00:23:58.758 "method": "bdev_nvme_set_hotplug", 00:23:58.758 "params": { 00:23:58.758 "period_us": 100000, 00:23:58.758 "enable": false 00:23:58.758 } 00:23:58.758 }, 00:23:58.758 { 00:23:58.758 "method": "bdev_malloc_create", 00:23:58.758 "params": { 00:23:58.758 "name": "malloc0", 00:23:58.758 "num_blocks": 8192, 00:23:58.758 "block_size": 4096, 00:23:58.758 "physical_block_size": 4096, 00:23:58.758 "uuid": "b7977315-8b33-4fad-bb34-9a59ef377455", 00:23:58.758 "optimal_io_boundary": 0, 00:23:58.758 "md_size": 0, 00:23:58.758 "dif_type": 0, 00:23:58.758 "dif_is_head_of_md": false, 00:23:58.758 "dif_pi_format": 0 00:23:58.758 } 00:23:58.758 }, 00:23:58.758 { 00:23:58.758 "method": "bdev_wait_for_examine" 00:23:58.758 } 00:23:58.758 ] 00:23:58.758 }, 00:23:58.758 { 00:23:58.758 "subsystem": "nbd", 00:23:58.758 "config": [] 00:23:58.758 }, 00:23:58.758 { 00:23:58.758 "subsystem": "scheduler", 00:23:58.758 "config": [ 00:23:58.758 { 00:23:58.758 "method": "framework_set_scheduler", 00:23:58.758 "params": { 00:23:58.758 "name": "static" 00:23:58.758 } 00:23:58.758 } 00:23:58.758 ] 00:23:58.758 }, 00:23:58.758 { 00:23:58.758 "subsystem": "nvmf", 00:23:58.758 "config": [ 00:23:58.758 { 00:23:58.758 "method": "nvmf_set_config", 00:23:58.758 "params": { 00:23:58.758 "discovery_filter": "match_any", 00:23:58.758 "admin_cmd_passthru": { 00:23:58.758 "identify_ctrlr": false 00:23:58.758 }, 00:23:58.758 "dhchap_digests": [ 00:23:58.758 "sha256", 00:23:58.758 "sha384", 00:23:58.758 "sha512" 00:23:58.758 ], 00:23:58.758 "dhchap_dhgroups": [ 00:23:58.758 "null", 00:23:58.758 "ffdhe2048", 00:23:58.758 "ffdhe3072", 00:23:58.758 "ffdhe4096", 00:23:58.758 "ffdhe6144", 00:23:58.758 "ffdhe8192" 00:23:58.758 ] 00:23:58.758 } 00:23:58.758 }, 00:23:58.758 { 00:23:58.758 "method": "nvmf_set_max_subsystems", 00:23:58.758 "params": { 00:23:58.758 "max_subsystems": 1024 00:23:58.758 } 00:23:58.758 }, 00:23:58.758 { 00:23:58.758 "method": "nvmf_set_crdt", 00:23:58.758 "params": { 00:23:58.758 "crdt1": 0, 00:23:58.758 "crdt2": 0, 00:23:58.758 "crdt3": 0 00:23:58.758 } 00:23:58.758 }, 00:23:58.758 { 00:23:58.758 "method": "nvmf_create_transport", 00:23:58.758 "params": { 00:23:58.758 "trtype": "TCP", 00:23:58.758 "max_queue_depth": 128, 00:23:58.758 "max_io_qpairs_per_ctrlr": 127, 00:23:58.758 "in_capsule_data_size": 4096, 00:23:58.758 "max_io_size": 131072, 00:23:58.758 "io_unit_size": 131072, 00:23:58.758 "max_aq_depth": 128, 00:23:58.758 "num_shared_buffers": 511, 00:23:58.758 "buf_cache_size": 4294967295, 00:23:58.758 "dif_insert_or_strip": false, 00:23:58.758 "zcopy": false, 00:23:58.758 "c2h_success": false, 00:23:58.758 "sock_priority": 0, 00:23:58.758 "abort_timeout_sec": 1, 00:23:58.758 "ack_timeout": 0, 00:23:58.758 "data_wr_pool_size": 0 00:23:58.758 } 00:23:58.758 }, 00:23:58.758 { 00:23:58.758 "method": "nvmf_create_subsystem", 00:23:58.758 "params": { 00:23:58.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.758 "allow_any_host": false, 00:23:58.758 "serial_number": "SPDK00000000000001", 00:23:58.758 "model_number": "SPDK bdev Controller", 00:23:58.758 "max_namespaces": 10, 00:23:58.758 "min_cntlid": 1, 00:23:58.758 
"max_cntlid": 65519, 00:23:58.758 "ana_reporting": false 00:23:58.758 } 00:23:58.758 }, 00:23:58.758 { 00:23:58.758 "method": "nvmf_subsystem_add_host", 00:23:58.758 "params": { 00:23:58.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.758 "host": "nqn.2016-06.io.spdk:host1", 00:23:58.758 "psk": "key0" 00:23:58.758 } 00:23:58.758 }, 00:23:58.758 { 00:23:58.758 "method": "nvmf_subsystem_add_ns", 00:23:58.758 "params": { 00:23:58.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.758 "namespace": { 00:23:58.758 "nsid": 1, 00:23:58.758 "bdev_name": "malloc0", 00:23:58.758 "nguid": "B79773158B334FADBB349A59EF377455", 00:23:58.758 "uuid": "b7977315-8b33-4fad-bb34-9a59ef377455", 00:23:58.758 "no_auto_visible": false 00:23:58.758 } 00:23:58.758 } 00:23:58.758 }, 00:23:58.758 { 00:23:58.758 "method": "nvmf_subsystem_add_listener", 00:23:58.758 "params": { 00:23:58.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.758 "listen_address": { 00:23:58.758 "trtype": "TCP", 00:23:58.758 "adrfam": "IPv4", 00:23:58.758 "traddr": "10.0.0.2", 00:23:58.758 "trsvcid": "4420" 00:23:58.758 }, 00:23:58.758 "secure_channel": true 00:23:58.758 } 00:23:58.758 } 00:23:58.758 ] 00:23:58.758 } 00:23:58.758 ] 00:23:58.758 }' 00:23:58.758 21:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:59.017 21:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:59.017 "subsystems": [ 00:23:59.017 { 00:23:59.017 "subsystem": "keyring", 00:23:59.017 "config": [ 00:23:59.017 { 00:23:59.017 "method": "keyring_file_add_key", 00:23:59.017 "params": { 00:23:59.017 "name": "key0", 00:23:59.017 "path": "/tmp/tmp.cxjXfAwzeZ" 00:23:59.017 } 00:23:59.017 } 00:23:59.017 ] 00:23:59.017 }, 00:23:59.017 { 00:23:59.017 "subsystem": "iobuf", 00:23:59.017 "config": [ 00:23:59.017 { 00:23:59.017 "method": "iobuf_set_options", 00:23:59.017 "params": { 00:23:59.017 "small_pool_count": 8192, 00:23:59.017 "large_pool_count": 1024, 00:23:59.017 "small_bufsize": 8192, 00:23:59.017 "large_bufsize": 135168, 00:23:59.017 "enable_numa": false 00:23:59.017 } 00:23:59.017 } 00:23:59.017 ] 00:23:59.017 }, 00:23:59.017 { 00:23:59.017 "subsystem": "sock", 00:23:59.017 "config": [ 00:23:59.017 { 00:23:59.017 "method": "sock_set_default_impl", 00:23:59.017 "params": { 00:23:59.017 "impl_name": "posix" 00:23:59.017 } 00:23:59.017 }, 00:23:59.017 { 00:23:59.017 "method": "sock_impl_set_options", 00:23:59.017 "params": { 00:23:59.017 "impl_name": "ssl", 00:23:59.017 "recv_buf_size": 4096, 00:23:59.017 "send_buf_size": 4096, 00:23:59.017 "enable_recv_pipe": true, 00:23:59.017 "enable_quickack": false, 00:23:59.017 "enable_placement_id": 0, 00:23:59.017 "enable_zerocopy_send_server": true, 00:23:59.017 "enable_zerocopy_send_client": false, 00:23:59.017 "zerocopy_threshold": 0, 00:23:59.017 "tls_version": 0, 00:23:59.017 "enable_ktls": false 00:23:59.017 } 00:23:59.017 }, 00:23:59.017 { 00:23:59.017 "method": "sock_impl_set_options", 00:23:59.017 "params": { 00:23:59.017 "impl_name": "posix", 00:23:59.017 "recv_buf_size": 2097152, 00:23:59.017 "send_buf_size": 2097152, 00:23:59.017 "enable_recv_pipe": true, 00:23:59.017 "enable_quickack": false, 00:23:59.017 "enable_placement_id": 0, 00:23:59.017 "enable_zerocopy_send_server": true, 00:23:59.017 "enable_zerocopy_send_client": false, 00:23:59.017 "zerocopy_threshold": 0, 00:23:59.017 "tls_version": 0, 00:23:59.017 "enable_ktls": false 00:23:59.017 } 00:23:59.017 
} 00:23:59.017 ] 00:23:59.018 }, 00:23:59.018 { 00:23:59.018 "subsystem": "vmd", 00:23:59.018 "config": [] 00:23:59.018 }, 00:23:59.018 { 00:23:59.018 "subsystem": "accel", 00:23:59.018 "config": [ 00:23:59.018 { 00:23:59.018 "method": "accel_set_options", 00:23:59.018 "params": { 00:23:59.018 "small_cache_size": 128, 00:23:59.018 "large_cache_size": 16, 00:23:59.018 "task_count": 2048, 00:23:59.018 "sequence_count": 2048, 00:23:59.018 "buf_count": 2048 00:23:59.018 } 00:23:59.018 } 00:23:59.018 ] 00:23:59.018 }, 00:23:59.018 { 00:23:59.018 "subsystem": "bdev", 00:23:59.018 "config": [ 00:23:59.018 { 00:23:59.018 "method": "bdev_set_options", 00:23:59.018 "params": { 00:23:59.018 "bdev_io_pool_size": 65535, 00:23:59.018 "bdev_io_cache_size": 256, 00:23:59.018 "bdev_auto_examine": true, 00:23:59.018 "iobuf_small_cache_size": 128, 00:23:59.018 "iobuf_large_cache_size": 16 00:23:59.018 } 00:23:59.018 }, 00:23:59.018 { 00:23:59.018 "method": "bdev_raid_set_options", 00:23:59.018 "params": { 00:23:59.018 "process_window_size_kb": 1024, 00:23:59.018 "process_max_bandwidth_mb_sec": 0 00:23:59.018 } 00:23:59.018 }, 00:23:59.018 { 00:23:59.018 "method": "bdev_iscsi_set_options", 00:23:59.018 "params": { 00:23:59.018 "timeout_sec": 30 00:23:59.018 } 00:23:59.018 }, 00:23:59.018 { 00:23:59.018 "method": "bdev_nvme_set_options", 00:23:59.018 "params": { 00:23:59.018 "action_on_timeout": "none", 00:23:59.018 "timeout_us": 0, 00:23:59.018 "timeout_admin_us": 0, 00:23:59.018 "keep_alive_timeout_ms": 10000, 00:23:59.018 "arbitration_burst": 0, 00:23:59.018 "low_priority_weight": 0, 00:23:59.018 "medium_priority_weight": 0, 00:23:59.018 "high_priority_weight": 0, 00:23:59.018 "nvme_adminq_poll_period_us": 10000, 00:23:59.018 "nvme_ioq_poll_period_us": 0, 00:23:59.018 "io_queue_requests": 512, 00:23:59.018 "delay_cmd_submit": true, 00:23:59.018 "transport_retry_count": 4, 00:23:59.018 "bdev_retry_count": 3, 00:23:59.018 "transport_ack_timeout": 0, 00:23:59.018 "ctrlr_loss_timeout_sec": 0, 00:23:59.018 "reconnect_delay_sec": 0, 00:23:59.018 "fast_io_fail_timeout_sec": 0, 00:23:59.018 "disable_auto_failback": false, 00:23:59.018 "generate_uuids": false, 00:23:59.018 "transport_tos": 0, 00:23:59.018 "nvme_error_stat": false, 00:23:59.018 "rdma_srq_size": 0, 00:23:59.018 "io_path_stat": false, 00:23:59.018 "allow_accel_sequence": false, 00:23:59.018 "rdma_max_cq_size": 0, 00:23:59.018 "rdma_cm_event_timeout_ms": 0, 00:23:59.018 "dhchap_digests": [ 00:23:59.018 "sha256", 00:23:59.018 "sha384", 00:23:59.018 "sha512" 00:23:59.018 ], 00:23:59.018 "dhchap_dhgroups": [ 00:23:59.018 "null", 00:23:59.018 "ffdhe2048", 00:23:59.018 "ffdhe3072", 00:23:59.018 "ffdhe4096", 00:23:59.018 "ffdhe6144", 00:23:59.018 "ffdhe8192" 00:23:59.018 ] 00:23:59.018 } 00:23:59.018 }, 00:23:59.018 { 00:23:59.018 "method": "bdev_nvme_attach_controller", 00:23:59.018 "params": { 00:23:59.018 "name": "TLSTEST", 00:23:59.018 "trtype": "TCP", 00:23:59.018 "adrfam": "IPv4", 00:23:59.018 "traddr": "10.0.0.2", 00:23:59.018 "trsvcid": "4420", 00:23:59.018 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.018 "prchk_reftag": false, 00:23:59.018 "prchk_guard": false, 00:23:59.018 "ctrlr_loss_timeout_sec": 0, 00:23:59.018 "reconnect_delay_sec": 0, 00:23:59.018 "fast_io_fail_timeout_sec": 0, 00:23:59.018 "psk": "key0", 00:23:59.018 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:59.018 "hdgst": false, 00:23:59.018 "ddgst": false, 00:23:59.018 "multipath": "multipath" 00:23:59.018 } 00:23:59.018 }, 00:23:59.018 { 00:23:59.018 "method": 
"bdev_nvme_set_hotplug", 00:23:59.018 "params": { 00:23:59.018 "period_us": 100000, 00:23:59.018 "enable": false 00:23:59.018 } 00:23:59.018 }, 00:23:59.018 { 00:23:59.018 "method": "bdev_wait_for_examine" 00:23:59.018 } 00:23:59.018 ] 00:23:59.018 }, 00:23:59.018 { 00:23:59.018 "subsystem": "nbd", 00:23:59.018 "config": [] 00:23:59.018 } 00:23:59.018 ] 00:23:59.018 }' 00:23:59.018 21:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3032490 00:23:59.018 21:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3032490 ']' 00:23:59.018 21:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3032490 00:23:59.018 21:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:59.018 21:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.018 21:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3032490 00:23:59.276 21:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:59.276 21:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:59.276 21:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3032490' 00:23:59.277 killing process with pid 3032490 00:23:59.277 21:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3032490 00:23:59.277 Received shutdown signal, test time was about 10.000000 seconds 00:23:59.277 00:23:59.277 Latency(us) 00:23:59.277 [2024-11-19T20:13:33.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.277 [2024-11-19T20:13:33.072Z] =================================================================================================================== 00:23:59.277 [2024-11-19T20:13:33.072Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:59.277 21:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3032490 00:23:59.845 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3032073 00:23:59.845 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3032073 ']' 00:23:59.845 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3032073 00:23:59.845 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:59.845 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.845 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3032073 00:24:00.105 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:00.105 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:00.105 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3032073' 00:24:00.105 killing process with pid 3032073 00:24:00.105 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3032073 00:24:00.105 21:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3032073 00:24:01.045 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:01.045 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:01.045 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:01.045 "subsystems": [ 00:24:01.045 { 00:24:01.045 "subsystem": "keyring", 00:24:01.045 "config": [ 00:24:01.045 { 00:24:01.045 "method": "keyring_file_add_key", 00:24:01.045 "params": { 00:24:01.045 "name": "key0", 00:24:01.045 "path": "/tmp/tmp.cxjXfAwzeZ" 00:24:01.045 } 00:24:01.045 } 00:24:01.045 ] 00:24:01.045 }, 00:24:01.045 { 00:24:01.045 "subsystem": "iobuf", 00:24:01.045 "config": [ 00:24:01.045 { 00:24:01.045 "method": "iobuf_set_options", 00:24:01.045 "params": { 00:24:01.045 "small_pool_count": 8192, 00:24:01.045 "large_pool_count": 1024, 00:24:01.045 "small_bufsize": 8192, 00:24:01.045 "large_bufsize": 135168, 00:24:01.045 "enable_numa": false 00:24:01.045 } 00:24:01.045 } 00:24:01.045 ] 00:24:01.045 }, 00:24:01.045 { 00:24:01.045 "subsystem": "sock", 00:24:01.045 "config": [ 00:24:01.045 { 00:24:01.045 "method": "sock_set_default_impl", 00:24:01.045 "params": { 00:24:01.045 "impl_name": "posix" 00:24:01.045 } 00:24:01.045 }, 00:24:01.045 { 00:24:01.045 "method": "sock_impl_set_options", 00:24:01.045 "params": { 00:24:01.045 "impl_name": "ssl", 00:24:01.045 "recv_buf_size": 4096, 00:24:01.045 "send_buf_size": 4096, 00:24:01.045 "enable_recv_pipe": true, 00:24:01.045 "enable_quickack": false, 00:24:01.045 "enable_placement_id": 0, 00:24:01.045 "enable_zerocopy_send_server": true, 00:24:01.045 "enable_zerocopy_send_client": false, 00:24:01.045 "zerocopy_threshold": 0, 00:24:01.045 "tls_version": 0, 00:24:01.045 "enable_ktls": false 00:24:01.045 } 00:24:01.045 }, 00:24:01.045 { 00:24:01.045 "method": "sock_impl_set_options", 00:24:01.045 "params": { 00:24:01.045 "impl_name": "posix", 00:24:01.045 "recv_buf_size": 2097152, 00:24:01.045 "send_buf_size": 2097152, 00:24:01.045 "enable_recv_pipe": true, 00:24:01.045 "enable_quickack": false, 00:24:01.045 "enable_placement_id": 0, 00:24:01.046 "enable_zerocopy_send_server": true, 00:24:01.046 "enable_zerocopy_send_client": false, 00:24:01.046 "zerocopy_threshold": 0, 00:24:01.046 "tls_version": 0, 00:24:01.046 "enable_ktls": false 00:24:01.046 } 00:24:01.046 } 00:24:01.046 ] 00:24:01.046 }, 00:24:01.046 { 00:24:01.046 "subsystem": "vmd", 00:24:01.046 "config": [] 00:24:01.046 }, 00:24:01.046 { 00:24:01.046 "subsystem": "accel", 00:24:01.046 "config": [ 00:24:01.046 { 00:24:01.046 "method": "accel_set_options", 00:24:01.046 "params": { 00:24:01.046 "small_cache_size": 128, 00:24:01.046 "large_cache_size": 16, 00:24:01.046 "task_count": 2048, 00:24:01.046 "sequence_count": 2048, 00:24:01.046 "buf_count": 2048 00:24:01.046 } 00:24:01.046 } 00:24:01.046 ] 00:24:01.046 }, 00:24:01.046 { 00:24:01.046 "subsystem": "bdev", 00:24:01.046 "config": [ 00:24:01.046 { 00:24:01.046 "method": "bdev_set_options", 00:24:01.046 "params": { 00:24:01.046 "bdev_io_pool_size": 65535, 00:24:01.046 "bdev_io_cache_size": 256, 00:24:01.046 "bdev_auto_examine": true, 00:24:01.046 "iobuf_small_cache_size": 128, 00:24:01.046 "iobuf_large_cache_size": 16 00:24:01.046 } 00:24:01.046 }, 00:24:01.046 { 00:24:01.046 "method": "bdev_raid_set_options", 00:24:01.046 "params": { 00:24:01.046 "process_window_size_kb": 1024, 00:24:01.046 "process_max_bandwidth_mb_sec": 0 00:24:01.046 } 00:24:01.046 }, 00:24:01.046 { 00:24:01.046 "method": "bdev_iscsi_set_options", 00:24:01.046 "params": { 00:24:01.046 
"timeout_sec": 30 00:24:01.046 } 00:24:01.046 }, 00:24:01.046 { 00:24:01.046 "method": "bdev_nvme_set_options", 00:24:01.046 "params": { 00:24:01.046 "action_on_timeout": "none", 00:24:01.046 "timeout_us": 0, 00:24:01.046 "timeout_admin_us": 0, 00:24:01.046 "keep_alive_timeout_ms": 10000, 00:24:01.046 "arbitration_burst": 0, 00:24:01.046 "low_priority_weight": 0, 00:24:01.046 "medium_priority_weight": 0, 00:24:01.046 "high_priority_weight": 0, 00:24:01.046 "nvme_adminq_poll_period_us": 10000, 00:24:01.046 "nvme_ioq_poll_period_us": 0, 00:24:01.046 "io_queue_requests": 0, 00:24:01.046 "delay_cmd_submit": true, 00:24:01.046 "transport_retry_count": 4, 00:24:01.046 "bdev_retry_count": 3, 00:24:01.046 "transport_ack_timeout": 0, 00:24:01.046 "ctrlr_loss_timeout_sec": 0, 00:24:01.046 "reconnect_delay_sec": 0, 00:24:01.046 "fast_io_fail_timeout_sec": 0, 00:24:01.046 "disable_auto_failback": false, 00:24:01.046 "generate_uuids": false, 00:24:01.046 "transport_tos": 0, 00:24:01.046 "nvme_error_stat": false, 00:24:01.046 "rdma_srq_size": 0, 00:24:01.046 "io_path_stat": false, 00:24:01.046 "allow_accel_sequence": false, 00:24:01.046 "rdma_max_cq_size": 0, 00:24:01.046 "rdma_cm_event_timeout_ms": 0, 00:24:01.046 "dhchap_digests": [ 00:24:01.046 "sha256", 00:24:01.046 "sha384", 00:24:01.046 "sha512" 00:24:01.046 ], 00:24:01.046 "dhchap_dhgroups": [ 00:24:01.046 "null", 00:24:01.046 "ffdhe2048", 00:24:01.046 "ffdhe3072", 00:24:01.046 "ffdhe4096", 00:24:01.046 "ffdhe6144", 00:24:01.046 "ffdhe8192" 00:24:01.046 ] 00:24:01.046 } 00:24:01.046 }, 00:24:01.046 { 00:24:01.046 "method": "bdev_nvme_set_hotplug", 00:24:01.046 "params": { 00:24:01.046 "period_us": 100000, 00:24:01.046 "enable": false 00:24:01.046 } 00:24:01.046 }, 00:24:01.046 { 00:24:01.046 "method": "bdev_malloc_create", 00:24:01.046 "params": { 00:24:01.046 "name": "malloc0", 00:24:01.046 "num_blocks": 8192, 00:24:01.046 "block_size": 4096, 00:24:01.046 "physical_block_size": 4096, 00:24:01.046 "uuid": "b7977315-8b33-4fad-bb34-9a59ef377455", 00:24:01.046 "optimal_io_boundary": 0, 00:24:01.046 "md_size": 0, 00:24:01.046 "dif_type": 0, 00:24:01.046 "dif_is_head_of_md": false, 00:24:01.046 "dif_pi_format": 0 00:24:01.046 } 00:24:01.046 }, 00:24:01.046 { 00:24:01.046 "method": "bdev_wait_for_examine" 00:24:01.046 } 00:24:01.046 ] 00:24:01.046 }, 00:24:01.046 { 00:24:01.046 "subsystem": "nbd", 00:24:01.046 "config": [] 00:24:01.046 }, 00:24:01.046 { 00:24:01.046 "subsystem": "scheduler", 00:24:01.046 "config": [ 00:24:01.046 { 00:24:01.046 "method": "framework_set_scheduler", 00:24:01.046 "params": { 00:24:01.046 "name": "static" 00:24:01.046 } 00:24:01.046 } 00:24:01.046 ] 00:24:01.046 }, 00:24:01.046 { 00:24:01.046 "subsystem": "nvmf", 00:24:01.046 "config": [ 00:24:01.046 { 00:24:01.046 "method": "nvmf_set_config", 00:24:01.046 "params": { 00:24:01.046 "discovery_filter": "match_any", 00:24:01.046 "admin_cmd_passthru": { 00:24:01.046 "identify_ctrlr": false 00:24:01.046 }, 00:24:01.046 "dhchap_digests": [ 00:24:01.046 "sha256", 00:24:01.046 "sha384", 00:24:01.046 "sha512" 00:24:01.046 ], 00:24:01.046 "dhchap_dhgroups": [ 00:24:01.046 "null", 00:24:01.046 "ffdhe2048", 00:24:01.046 "ffdhe3072", 00:24:01.046 "ffdhe4096", 00:24:01.046 "ffdhe6144", 00:24:01.046 "ffdhe8192" 00:24:01.046 ] 00:24:01.046 } 00:24:01.046 }, 00:24:01.046 { 00:24:01.046 "method": "nvmf_set_max_subsystems", 00:24:01.046 "params": { 00:24:01.046 "max_subsystems": 1024 00:24:01.046 } 00:24:01.046 }, 00:24:01.046 { 00:24:01.046 "method": "nvmf_set_crdt", 00:24:01.046 "params": { 
00:24:01.046 "crdt1": 0, 00:24:01.046 "crdt2": 0, 00:24:01.046 "crdt3": 0 00:24:01.046 } 00:24:01.046 }, 00:24:01.046 { 00:24:01.046 "method": "nvmf_create_transport", 00:24:01.046 "params": { 00:24:01.046 "trtype": "TCP", 00:24:01.046 "max_queue_depth": 128, 00:24:01.046 "max_io_qpairs_per_ctrlr": 127, 00:24:01.046 "in_capsule_data_size": 4096, 00:24:01.046 "max_io_size": 131072, 00:24:01.046 "io_unit_size": 131072, 00:24:01.046 "max_aq_depth": 128, 00:24:01.046 "num_shared_buffers": 511, 00:24:01.046 "buf_cache_size": 4294967295, 00:24:01.046 "dif_insert_or_strip": false, 00:24:01.046 "zcopy": false, 00:24:01.046 "c2h_success": false, 00:24:01.046 "sock_priority": 0, 00:24:01.046 "abort_timeout_sec": 1, 00:24:01.046 "ack_timeout": 0, 00:24:01.046 "data_wr_pool_size": 0 00:24:01.046 } 00:24:01.046 }, 00:24:01.046 { 00:24:01.046 "method": "nvmf_create_subsystem", 00:24:01.046 "params": { 00:24:01.046 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.046 "allow_any_host": false, 00:24:01.046 "serial_number": "SPDK00000000000001", 00:24:01.046 "model_number": "SPDK bdev Controller", 00:24:01.046 "max_namespaces": 10, 00:24:01.046 "min_cntlid": 1, 00:24:01.046 "max_cntlid": 65519, 00:24:01.046 "ana_reporting": false 00:24:01.046 } 00:24:01.046 }, 00:24:01.046 { 00:24:01.046 "method": "nvmf_subsystem_add_host", 00:24:01.046 "params": { 00:24:01.046 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.046 "host": "nqn.2016-06.io.spdk:host1", 00:24:01.046 "psk": "key0" 00:24:01.046 } 00:24:01.046 }, 00:24:01.046 { 00:24:01.046 "method": "nvmf_subsystem_add_ns", 00:24:01.046 "params": { 00:24:01.046 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.046 "namespace": { 00:24:01.046 "nsid": 1, 00:24:01.046 "bdev_name": "malloc0", 00:24:01.046 "nguid": "B79773158B334FADBB349A59EF377455", 00:24:01.046 "uuid": "b7977315-8b33-4fad-bb34-9a59ef377455", 00:24:01.046 "no_auto_visible": false 00:24:01.046 } 00:24:01.046 } 00:24:01.046 }, 00:24:01.046 { 00:24:01.046 "method": "nvmf_subsystem_add_listener", 00:24:01.046 "params": { 00:24:01.046 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.046 "listen_address": { 00:24:01.047 "trtype": "TCP", 00:24:01.047 "adrfam": "IPv4", 00:24:01.047 "traddr": "10.0.0.2", 00:24:01.047 "trsvcid": "4420" 00:24:01.047 }, 00:24:01.047 "secure_channel": true 00:24:01.047 } 00:24:01.047 } 00:24:01.047 ] 00:24:01.047 } 00:24:01.047 ] 00:24:01.047 }' 00:24:01.047 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:01.047 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.047 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3033040 00:24:01.047 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:01.047 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3033040 00:24:01.047 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3033040 ']' 00:24:01.047 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.047 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.047 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:24:01.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.047 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.047 21:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.306 [2024-11-19 21:13:34.911268] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:24:01.306 [2024-11-19 21:13:34.911444] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.306 [2024-11-19 21:13:35.063781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.566 [2024-11-19 21:13:35.199869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.567 [2024-11-19 21:13:35.199973] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.567 [2024-11-19 21:13:35.199999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.567 [2024-11-19 21:13:35.200024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.567 [2024-11-19 21:13:35.200044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.567 [2024-11-19 21:13:35.201821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.133 [2024-11-19 21:13:35.754750] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.133 [2024-11-19 21:13:35.786771] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:02.133 [2024-11-19 21:13:35.787158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.133 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.133 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:02.133 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:02.133 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:02.133 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.392 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.392 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3033188 00:24:02.392 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3033188 /var/tmp/bdevperf.sock 00:24:02.392 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3033188 ']' 00:24:02.392 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:02.392 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:02.392 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:02.392 21:13:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:02.392 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:02.392 "subsystems": [ 00:24:02.392 { 00:24:02.392 "subsystem": "keyring", 00:24:02.392 "config": [ 00:24:02.392 { 00:24:02.392 "method": "keyring_file_add_key", 00:24:02.392 "params": { 00:24:02.392 "name": "key0", 00:24:02.392 "path": "/tmp/tmp.cxjXfAwzeZ" 00:24:02.392 } 00:24:02.392 } 00:24:02.392 ] 00:24:02.392 }, 00:24:02.392 { 00:24:02.392 "subsystem": "iobuf", 00:24:02.392 "config": [ 00:24:02.392 { 00:24:02.392 "method": "iobuf_set_options", 00:24:02.392 "params": { 00:24:02.392 "small_pool_count": 8192, 00:24:02.392 "large_pool_count": 1024, 00:24:02.392 "small_bufsize": 8192, 00:24:02.392 "large_bufsize": 135168, 00:24:02.392 "enable_numa": false 00:24:02.392 } 00:24:02.392 } 00:24:02.392 ] 00:24:02.392 }, 00:24:02.392 { 00:24:02.392 "subsystem": "sock", 00:24:02.392 "config": [ 00:24:02.392 { 00:24:02.392 "method": "sock_set_default_impl", 00:24:02.392 "params": { 00:24:02.392 "impl_name": "posix" 00:24:02.392 } 00:24:02.392 }, 00:24:02.392 { 00:24:02.392 "method": "sock_impl_set_options", 00:24:02.392 "params": { 00:24:02.392 "impl_name": "ssl", 00:24:02.392 "recv_buf_size": 4096, 00:24:02.392 "send_buf_size": 4096, 00:24:02.392 "enable_recv_pipe": true, 00:24:02.392 "enable_quickack": false, 00:24:02.392 "enable_placement_id": 0, 00:24:02.392 "enable_zerocopy_send_server": true, 00:24:02.392 "enable_zerocopy_send_client": false, 00:24:02.392 "zerocopy_threshold": 0, 00:24:02.392 "tls_version": 0, 00:24:02.392 "enable_ktls": false 00:24:02.392 } 00:24:02.392 }, 00:24:02.392 { 00:24:02.392 "method": "sock_impl_set_options", 00:24:02.392 "params": { 00:24:02.392 "impl_name": "posix", 00:24:02.392 "recv_buf_size": 2097152, 00:24:02.392 "send_buf_size": 2097152, 00:24:02.392 "enable_recv_pipe": true, 00:24:02.392 "enable_quickack": false, 00:24:02.392 "enable_placement_id": 0, 00:24:02.392 "enable_zerocopy_send_server": true, 00:24:02.393 "enable_zerocopy_send_client": false, 00:24:02.393 "zerocopy_threshold": 0, 00:24:02.393 "tls_version": 0, 00:24:02.393 "enable_ktls": false 00:24:02.393 } 00:24:02.393 } 00:24:02.393 ] 00:24:02.393 }, 00:24:02.393 { 00:24:02.393 "subsystem": "vmd", 00:24:02.393 "config": [] 00:24:02.393 }, 00:24:02.393 { 00:24:02.393 "subsystem": "accel", 00:24:02.393 "config": [ 00:24:02.393 { 00:24:02.393 "method": "accel_set_options", 00:24:02.393 "params": { 00:24:02.393 "small_cache_size": 128, 00:24:02.393 "large_cache_size": 16, 00:24:02.393 "task_count": 2048, 00:24:02.393 "sequence_count": 2048, 00:24:02.393 "buf_count": 2048 00:24:02.393 } 00:24:02.393 } 00:24:02.393 ] 00:24:02.393 }, 00:24:02.393 { 00:24:02.393 "subsystem": "bdev", 00:24:02.393 "config": [ 00:24:02.393 { 00:24:02.393 "method": "bdev_set_options", 00:24:02.393 "params": { 00:24:02.393 "bdev_io_pool_size": 65535, 00:24:02.393 "bdev_io_cache_size": 256, 00:24:02.393 "bdev_auto_examine": true, 00:24:02.393 "iobuf_small_cache_size": 128, 00:24:02.393 "iobuf_large_cache_size": 16 00:24:02.393 } 00:24:02.393 }, 00:24:02.393 { 00:24:02.393 "method": "bdev_raid_set_options", 00:24:02.393 "params": { 00:24:02.393 "process_window_size_kb": 1024, 00:24:02.393 "process_max_bandwidth_mb_sec": 0 00:24:02.393 } 00:24:02.393 }, 00:24:02.393 { 00:24:02.393 "method": "bdev_iscsi_set_options", 00:24:02.393 "params": { 00:24:02.393 
"timeout_sec": 30 00:24:02.393 } 00:24:02.393 }, 00:24:02.393 { 00:24:02.393 "method": "bdev_nvme_set_options", 00:24:02.393 "params": { 00:24:02.393 "action_on_timeout": "none", 00:24:02.393 "timeout_us": 0, 00:24:02.393 "timeout_admin_us": 0, 00:24:02.393 "keep_alive_timeout_ms": 10000, 00:24:02.393 "arbitration_burst": 0, 00:24:02.393 "low_priority_weight": 0, 00:24:02.393 "medium_priority_weight": 0, 00:24:02.393 "high_priority_weight": 0, 00:24:02.393 "nvme_adminq_poll_period_us": 10000, 00:24:02.393 "nvme_ioq_poll_period_us": 0, 00:24:02.393 "io_queue_requests": 512, 00:24:02.393 "delay_cmd_submit": true, 00:24:02.393 "transport_retry_count": 4, 00:24:02.393 "bdev_retry_count": 3, 00:24:02.393 "transport_ack_timeout": 0, 00:24:02.393 "ctrlr_loss_timeout_sec": 0, 00:24:02.393 "reconnect_delay_sec": 0, 00:24:02.393 "fast_io_fail_timeout_sec": 0, 00:24:02.393 "disable_auto_failback": false, 00:24:02.393 "generate_uuids": false, 00:24:02.393 "transport_tos": 0, 00:24:02.393 "nvme_error_stat": false, 00:24:02.393 "rdma_srq_size": 0, 00:24:02.393 "io_path_stat": false, 00:24:02.393 "allow_accel_sequence": false, 00:24:02.393 "rdma_max_cq_size": 0, 00:24:02.393 "rdma_cm_event_timeout_ms": 0Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:02.393 , 00:24:02.393 "dhchap_digests": [ 00:24:02.393 "sha256", 00:24:02.393 "sha384", 00:24:02.393 "sha512" 00:24:02.393 ], 00:24:02.393 "dhchap_dhgroups": [ 00:24:02.393 "null", 00:24:02.393 "ffdhe2048", 00:24:02.393 "ffdhe3072", 00:24:02.393 "ffdhe4096", 00:24:02.393 "ffdhe6144", 00:24:02.393 "ffdhe8192" 00:24:02.393 ] 00:24:02.393 } 00:24:02.393 }, 00:24:02.393 { 00:24:02.393 "method": "bdev_nvme_attach_controller", 00:24:02.393 "params": { 00:24:02.393 "name": "TLSTEST", 00:24:02.393 "trtype": "TCP", 00:24:02.393 "adrfam": "IPv4", 00:24:02.393 "traddr": "10.0.0.2", 00:24:02.393 "trsvcid": "4420", 00:24:02.393 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.393 "prchk_reftag": false, 00:24:02.393 "prchk_guard": false, 00:24:02.393 "ctrlr_loss_timeout_sec": 0, 00:24:02.393 "reconnect_delay_sec": 0, 00:24:02.393 "fast_io_fail_timeout_sec": 0, 00:24:02.393 "psk": "key0", 00:24:02.393 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:02.393 "hdgst": false, 00:24:02.393 "ddgst": false, 00:24:02.393 "multipath": "multipath" 00:24:02.393 } 00:24:02.393 }, 00:24:02.393 { 00:24:02.393 "method": "bdev_nvme_set_hotplug", 00:24:02.393 "params": { 00:24:02.393 "period_us": 100000, 00:24:02.393 "enable": false 00:24:02.393 } 00:24:02.393 }, 00:24:02.393 { 00:24:02.393 "method": "bdev_wait_for_examine" 00:24:02.393 } 00:24:02.393 ] 00:24:02.393 }, 00:24:02.393 { 00:24:02.393 "subsystem": "nbd", 00:24:02.393 "config": [] 00:24:02.393 } 00:24:02.393 ] 00:24:02.393 }' 00:24:02.393 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.393 21:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.393 [2024-11-19 21:13:36.019666] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:24:02.393 [2024-11-19 21:13:36.019812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3033188 ] 00:24:02.393 [2024-11-19 21:13:36.155543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.654 [2024-11-19 21:13:36.283393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.913 [2024-11-19 21:13:36.696213] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:03.482 21:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.482 21:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:03.482 21:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:03.482 Running I/O for 10 seconds... 00:24:05.362 2582.00 IOPS, 10.09 MiB/s [2024-11-19T20:13:40.537Z] 2588.00 IOPS, 10.11 MiB/s [2024-11-19T20:13:41.477Z] 2601.33 IOPS, 10.16 MiB/s [2024-11-19T20:13:42.417Z] 2605.25 IOPS, 10.18 MiB/s [2024-11-19T20:13:43.356Z] 2611.80 IOPS, 10.20 MiB/s [2024-11-19T20:13:44.296Z] 2610.83 IOPS, 10.20 MiB/s [2024-11-19T20:13:45.233Z] 2615.00 IOPS, 10.21 MiB/s [2024-11-19T20:13:46.172Z] 2619.75 IOPS, 10.23 MiB/s [2024-11-19T20:13:47.556Z] 2617.56 IOPS, 10.22 MiB/s [2024-11-19T20:13:47.556Z] 2623.10 IOPS, 10.25 MiB/s 00:24:13.761 Latency(us) 00:24:13.761 [2024-11-19T20:13:47.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.761 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:13.761 Verification LBA range: start 0x0 length 0x2000 00:24:13.761 TLSTESTn1 : 10.03 2629.06 10.27 0.00 0.00 48599.55 8543.95 41360.50 00:24:13.761 [2024-11-19T20:13:47.556Z] =================================================================================================================== 00:24:13.761 [2024-11-19T20:13:47.556Z] Total : 2629.06 10.27 0.00 0.00 48599.55 8543.95 41360.50 00:24:13.761 { 00:24:13.761 "results": [ 00:24:13.761 { 00:24:13.761 "job": "TLSTESTn1", 00:24:13.761 "core_mask": "0x4", 00:24:13.761 "workload": "verify", 00:24:13.761 "status": "finished", 00:24:13.761 "verify_range": { 00:24:13.761 "start": 0, 00:24:13.761 "length": 8192 00:24:13.761 }, 00:24:13.761 "queue_depth": 128, 00:24:13.761 "io_size": 4096, 00:24:13.761 "runtime": 10.025648, 00:24:13.761 "iops": 2629.056994620198, 00:24:13.761 "mibps": 10.269753885235149, 00:24:13.761 "io_failed": 0, 00:24:13.761 "io_timeout": 0, 00:24:13.761 "avg_latency_us": 48599.54942009313, 00:24:13.761 "min_latency_us": 8543.952592592592, 00:24:13.761 "max_latency_us": 41360.497777777775 00:24:13.761 } 00:24:13.761 ], 00:24:13.761 "core_count": 1 00:24:13.761 } 00:24:13.761 21:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:13.761 21:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3033188 00:24:13.761 21:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3033188 ']' 00:24:13.761 21:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3033188 00:24:13.761 21:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:24:13.761 21:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.761 21:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3033188 00:24:13.761 21:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:13.761 21:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:13.761 21:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3033188' 00:24:13.761 killing process with pid 3033188 00:24:13.761 21:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3033188 00:24:13.761 Received shutdown signal, test time was about 10.000000 seconds 00:24:13.761 00:24:13.761 Latency(us) 00:24:13.761 [2024-11-19T20:13:47.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.761 [2024-11-19T20:13:47.556Z] =================================================================================================================== 00:24:13.761 [2024-11-19T20:13:47.556Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:13.761 21:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3033188 00:24:14.331 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3033040 00:24:14.331 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3033040 ']' 00:24:14.331 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3033040 00:24:14.331 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:14.331 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.331 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3033040 00:24:14.331 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:14.331 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:14.331 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3033040' 00:24:14.331 killing process with pid 3033040 00:24:14.331 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3033040 00:24:14.331 21:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3033040 00:24:15.711 21:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:15.711 21:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:15.711 21:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:15.711 21:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.711 21:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3034658 00:24:15.711 21:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:15.711 21:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3034658 
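The 10.27 MiB/s reported above for TLSTESTn1 is simply the measured IOPS scaled by the 4096-byte I/O size of this run (MiB/s = IOPS x 4096 / 2^20); a quick sanity check with any POSIX awk:

  awk 'BEGIN { printf "%.2f MiB/s\n", 2629.06 * 4096 / (1024 * 1024) }'   # -> 10.27 MiB/s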
00:24:15.712 21:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3034658 ']' 00:24:15.712 21:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.712 21:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:15.712 21:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.712 21:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:15.712 21:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.712 [2024-11-19 21:13:49.470946] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:24:15.712 [2024-11-19 21:13:49.471117] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.969 [2024-11-19 21:13:49.623621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.969 [2024-11-19 21:13:49.761412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.969 [2024-11-19 21:13:49.761497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.969 [2024-11-19 21:13:49.761535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.969 [2024-11-19 21:13:49.761559] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.969 [2024-11-19 21:13:49.761579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
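The trace notices above appear because the target is started with -e 0xFFFF, so all tracepoint groups are enabled. A hedged sketch of capturing those events for offline inspection, following the log's own hints and assuming the spdk_trace tool was built into build/bin in this tree:

  # snapshot the live trace ring of the app registered as "nvmf", shm id 0
  build/bin/spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for later analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0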
00:24:15.969 [2024-11-19 21:13:49.763272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.904 21:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:16.904 21:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:16.904 21:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:16.904 21:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:16.904 21:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.904 21:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.904 21:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.cxjXfAwzeZ 00:24:16.904 21:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cxjXfAwzeZ 00:24:16.904 21:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:17.162 [2024-11-19 21:13:50.704481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.162 21:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:17.420 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:17.678 [2024-11-19 21:13:51.266045] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:17.678 [2024-11-19 21:13:51.266421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.678 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:17.936 malloc0 00:24:17.936 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:18.194 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cxjXfAwzeZ 00:24:18.452 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:18.710 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3035073 00:24:18.710 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:18.710 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:18.710 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3035073 /var/tmp/bdevperf.sock 00:24:18.710 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3035073 ']' 00:24:18.710 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:18.710 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:18.710 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:18.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:18.711 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:18.711 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.711 [2024-11-19 21:13:52.463432] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:24:18.711 [2024-11-19 21:13:52.463578] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035073 ] 00:24:18.971 [2024-11-19 21:13:52.605429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.971 [2024-11-19 21:13:52.740834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.905 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:19.905 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:19.905 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cxjXfAwzeZ 00:24:19.905 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:20.165 [2024-11-19 21:13:53.931301] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:20.424 nvme0n1 00:24:20.424 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:20.424 Running I/O for 1 seconds... 
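For reference, the setup_nvmf_tgt sequence traced above reduces to a short target-side RPC flow; a sketch of the same calls, in the order the harness issued them, assuming the target's default /var/tmp/spdk.sock RPC socket and rpc.py on PATH (full workspace paths shortened):

  rpc.py nvmf_create_transport -t tcp -o                      # -o disables the C2H success optimization ("c2h_success": false in the dumps)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k = TLS secure channel
  rpc.py bdev_malloc_create 32 4096 -b malloc0                # 32 MiB RAM-backed bdev, 4 KiB blocks
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.cxjXfAwzeZ        # register the PSK interchange file
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0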
00:24:21.803 2449.00 IOPS, 9.57 MiB/s 00:24:21.803 Latency(us) 00:24:21.803 [2024-11-19T20:13:55.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.803 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:21.803 Verification LBA range: start 0x0 length 0x2000 00:24:21.803 nvme0n1 : 1.03 2504.38 9.78 0.00 0.00 50481.93 10582.85 44467.39 00:24:21.803 [2024-11-19T20:13:55.598Z] =================================================================================================================== 00:24:21.803 [2024-11-19T20:13:55.598Z] Total : 2504.38 9.78 0.00 0.00 50481.93 10582.85 44467.39 00:24:21.803 { 00:24:21.803 "results": [ 00:24:21.803 { 00:24:21.803 "job": "nvme0n1", 00:24:21.803 "core_mask": "0x2", 00:24:21.803 "workload": "verify", 00:24:21.803 "status": "finished", 00:24:21.803 "verify_range": { 00:24:21.803 "start": 0, 00:24:21.803 "length": 8192 00:24:21.803 }, 00:24:21.803 "queue_depth": 128, 00:24:21.803 "io_size": 4096, 00:24:21.803 "runtime": 1.028997, 00:24:21.803 "iops": 2504.38047924338, 00:24:21.803 "mibps": 9.782736247044452, 00:24:21.803 "io_failed": 0, 00:24:21.803 "io_timeout": 0, 00:24:21.803 "avg_latency_us": 50481.9346251024, 00:24:21.803 "min_latency_us": 10582.85037037037, 00:24:21.803 "max_latency_us": 44467.38962962963 00:24:21.803 } 00:24:21.803 ], 00:24:21.803 "core_count": 1 00:24:21.803 } 00:24:21.803 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3035073 00:24:21.803 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3035073 ']' 00:24:21.803 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3035073 00:24:21.803 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:21.803 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.803 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3035073 00:24:21.803 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:21.803 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:21.803 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3035073' 00:24:21.803 killing process with pid 3035073 00:24:21.803 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3035073 00:24:21.803 Received shutdown signal, test time was about 1.000000 seconds 00:24:21.803 00:24:21.803 Latency(us) 00:24:21.803 [2024-11-19T20:13:55.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.803 [2024-11-19T20:13:55.598Z] =================================================================================================================== 00:24:21.803 [2024-11-19T20:13:55.598Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.803 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3035073 00:24:22.373 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3034658 00:24:22.373 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3034658 ']' 00:24:22.373 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3034658 00:24:22.373 21:13:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:22.373 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.373 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3034658 00:24:22.373 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:22.373 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:22.373 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3034658' 00:24:22.373 killing process with pid 3034658 00:24:22.373 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3034658 00:24:22.373 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3034658 00:24:23.750 21:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:23.750 21:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:23.750 21:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:23.750 21:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.750 21:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3035622 00:24:23.750 21:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:23.750 21:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3035622 00:24:23.750 21:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3035622 ']' 00:24:23.750 21:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.750 21:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.750 21:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.750 21:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.750 21:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.750 [2024-11-19 21:13:57.404179] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:24:23.750 [2024-11-19 21:13:57.404317] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.035 [2024-11-19 21:13:57.550414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.035 [2024-11-19 21:13:57.685792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.035 [2024-11-19 21:13:57.685877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:24.035 [2024-11-19 21:13:57.685903] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.035 [2024-11-19 21:13:57.685927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.035 [2024-11-19 21:13:57.685946] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.035 [2024-11-19 21:13:57.687563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.625 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.625 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:24.625 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:24.625 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:24.625 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.883 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.883 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:24.883 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.883 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.883 [2024-11-19 21:13:58.429874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.883 malloc0 00:24:24.883 [2024-11-19 21:13:58.493092] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:24.883 [2024-11-19 21:13:58.493497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.883 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.883 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3035774 00:24:24.883 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:24.883 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3035774 /var/tmp/bdevperf.sock 00:24:24.883 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3035774 ']' 00:24:24.883 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:24.883 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.883 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:24.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:24.883 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.883 21:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.883 [2024-11-19 21:13:58.603274] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:24:24.883 [2024-11-19 21:13:58.603465] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035774 ] 00:24:25.141 [2024-11-19 21:13:58.755455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.141 [2024-11-19 21:13:58.892819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.074 21:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.074 21:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:26.074 21:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cxjXfAwzeZ 00:24:26.074 21:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:26.332 [2024-11-19 21:14:00.098929] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:26.589 nvme0n1 00:24:26.590 21:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:26.590 Running I/O for 1 seconds... 00:24:27.781 2392.00 IOPS, 9.34 MiB/s 00:24:27.781 Latency(us) 00:24:27.781 [2024-11-19T20:14:01.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.781 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:27.781 Verification LBA range: start 0x0 length 0x2000 00:24:27.781 nvme0n1 : 1.03 2449.50 9.57 0.00 0.00 51631.80 10534.31 52428.80 00:24:27.781 [2024-11-19T20:14:01.576Z] =================================================================================================================== 00:24:27.781 [2024-11-19T20:14:01.576Z] Total : 2449.50 9.57 0.00 0.00 51631.80 10534.31 52428.80 00:24:27.781 { 00:24:27.781 "results": [ 00:24:27.781 { 00:24:27.781 "job": "nvme0n1", 00:24:27.781 "core_mask": "0x2", 00:24:27.781 "workload": "verify", 00:24:27.781 "status": "finished", 00:24:27.781 "verify_range": { 00:24:27.781 "start": 0, 00:24:27.781 "length": 8192 00:24:27.781 }, 00:24:27.781 "queue_depth": 128, 00:24:27.781 "io_size": 4096, 00:24:27.781 "runtime": 1.028783, 00:24:27.781 "iops": 2449.4961522497942, 00:24:27.781 "mibps": 9.568344344725759, 00:24:27.781 "io_failed": 0, 00:24:27.781 "io_timeout": 0, 00:24:27.781 "avg_latency_us": 51631.80171663727, 00:24:27.781 "min_latency_us": 10534.305185185185, 00:24:27.781 "max_latency_us": 52428.8 00:24:27.781 } 00:24:27.781 ], 00:24:27.781 "core_count": 1 00:24:27.781 } 00:24:27.781 21:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:27.781 21:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.781 21:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.781 21:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.781 21:14:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:27.781 "subsystems": [ 00:24:27.781 { 00:24:27.781 "subsystem": "keyring", 00:24:27.781 "config": [ 00:24:27.781 { 00:24:27.781 "method": "keyring_file_add_key", 00:24:27.781 "params": { 00:24:27.781 "name": "key0", 00:24:27.781 "path": "/tmp/tmp.cxjXfAwzeZ" 00:24:27.781 } 00:24:27.781 } 00:24:27.781 ] 00:24:27.781 }, 00:24:27.781 { 00:24:27.781 "subsystem": "iobuf", 00:24:27.781 "config": [ 00:24:27.781 { 00:24:27.781 "method": "iobuf_set_options", 00:24:27.781 "params": { 00:24:27.781 "small_pool_count": 8192, 00:24:27.781 "large_pool_count": 1024, 00:24:27.781 "small_bufsize": 8192, 00:24:27.781 "large_bufsize": 135168, 00:24:27.781 "enable_numa": false 00:24:27.781 } 00:24:27.781 } 00:24:27.781 ] 00:24:27.781 }, 00:24:27.781 { 00:24:27.781 "subsystem": "sock", 00:24:27.781 "config": [ 00:24:27.781 { 00:24:27.781 "method": "sock_set_default_impl", 00:24:27.782 "params": { 00:24:27.782 "impl_name": "posix" 00:24:27.782 } 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "method": "sock_impl_set_options", 00:24:27.782 "params": { 00:24:27.782 "impl_name": "ssl", 00:24:27.782 "recv_buf_size": 4096, 00:24:27.782 "send_buf_size": 4096, 00:24:27.782 "enable_recv_pipe": true, 00:24:27.782 "enable_quickack": false, 00:24:27.782 "enable_placement_id": 0, 00:24:27.782 "enable_zerocopy_send_server": true, 00:24:27.782 "enable_zerocopy_send_client": false, 00:24:27.782 "zerocopy_threshold": 0, 00:24:27.782 "tls_version": 0, 00:24:27.782 "enable_ktls": false 00:24:27.782 } 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "method": "sock_impl_set_options", 00:24:27.782 "params": { 00:24:27.782 "impl_name": "posix", 00:24:27.782 "recv_buf_size": 2097152, 00:24:27.782 "send_buf_size": 2097152, 00:24:27.782 "enable_recv_pipe": true, 00:24:27.782 "enable_quickack": false, 00:24:27.782 "enable_placement_id": 0, 00:24:27.782 "enable_zerocopy_send_server": true, 00:24:27.782 "enable_zerocopy_send_client": false, 00:24:27.782 "zerocopy_threshold": 0, 00:24:27.782 "tls_version": 0, 00:24:27.782 "enable_ktls": false 00:24:27.782 } 00:24:27.782 } 00:24:27.782 ] 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "subsystem": "vmd", 00:24:27.782 "config": [] 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "subsystem": "accel", 00:24:27.782 "config": [ 00:24:27.782 { 00:24:27.782 "method": "accel_set_options", 00:24:27.782 "params": { 00:24:27.782 "small_cache_size": 128, 00:24:27.782 "large_cache_size": 16, 00:24:27.782 "task_count": 2048, 00:24:27.782 "sequence_count": 2048, 00:24:27.782 "buf_count": 2048 00:24:27.782 } 00:24:27.782 } 00:24:27.782 ] 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "subsystem": "bdev", 00:24:27.782 "config": [ 00:24:27.782 { 00:24:27.782 "method": "bdev_set_options", 00:24:27.782 "params": { 00:24:27.782 "bdev_io_pool_size": 65535, 00:24:27.782 "bdev_io_cache_size": 256, 00:24:27.782 "bdev_auto_examine": true, 00:24:27.782 "iobuf_small_cache_size": 128, 00:24:27.782 "iobuf_large_cache_size": 16 00:24:27.782 } 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "method": "bdev_raid_set_options", 00:24:27.782 "params": { 00:24:27.782 "process_window_size_kb": 1024, 00:24:27.782 "process_max_bandwidth_mb_sec": 0 00:24:27.782 } 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "method": "bdev_iscsi_set_options", 00:24:27.782 "params": { 00:24:27.782 "timeout_sec": 30 00:24:27.782 } 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "method": "bdev_nvme_set_options", 00:24:27.782 "params": { 00:24:27.782 "action_on_timeout": "none", 00:24:27.782 
"timeout_us": 0, 00:24:27.782 "timeout_admin_us": 0, 00:24:27.782 "keep_alive_timeout_ms": 10000, 00:24:27.782 "arbitration_burst": 0, 00:24:27.782 "low_priority_weight": 0, 00:24:27.782 "medium_priority_weight": 0, 00:24:27.782 "high_priority_weight": 0, 00:24:27.782 "nvme_adminq_poll_period_us": 10000, 00:24:27.782 "nvme_ioq_poll_period_us": 0, 00:24:27.782 "io_queue_requests": 0, 00:24:27.782 "delay_cmd_submit": true, 00:24:27.782 "transport_retry_count": 4, 00:24:27.782 "bdev_retry_count": 3, 00:24:27.782 "transport_ack_timeout": 0, 00:24:27.782 "ctrlr_loss_timeout_sec": 0, 00:24:27.782 "reconnect_delay_sec": 0, 00:24:27.782 "fast_io_fail_timeout_sec": 0, 00:24:27.782 "disable_auto_failback": false, 00:24:27.782 "generate_uuids": false, 00:24:27.782 "transport_tos": 0, 00:24:27.782 "nvme_error_stat": false, 00:24:27.782 "rdma_srq_size": 0, 00:24:27.782 "io_path_stat": false, 00:24:27.782 "allow_accel_sequence": false, 00:24:27.782 "rdma_max_cq_size": 0, 00:24:27.782 "rdma_cm_event_timeout_ms": 0, 00:24:27.782 "dhchap_digests": [ 00:24:27.782 "sha256", 00:24:27.782 "sha384", 00:24:27.782 "sha512" 00:24:27.782 ], 00:24:27.782 "dhchap_dhgroups": [ 00:24:27.782 "null", 00:24:27.782 "ffdhe2048", 00:24:27.782 "ffdhe3072", 00:24:27.782 "ffdhe4096", 00:24:27.782 "ffdhe6144", 00:24:27.782 "ffdhe8192" 00:24:27.782 ] 00:24:27.782 } 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "method": "bdev_nvme_set_hotplug", 00:24:27.782 "params": { 00:24:27.782 "period_us": 100000, 00:24:27.782 "enable": false 00:24:27.782 } 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "method": "bdev_malloc_create", 00:24:27.782 "params": { 00:24:27.782 "name": "malloc0", 00:24:27.782 "num_blocks": 8192, 00:24:27.782 "block_size": 4096, 00:24:27.782 "physical_block_size": 4096, 00:24:27.782 "uuid": "e7fbe78b-1b92-4635-9338-000a6ba6231d", 00:24:27.782 "optimal_io_boundary": 0, 00:24:27.782 "md_size": 0, 00:24:27.782 "dif_type": 0, 00:24:27.782 "dif_is_head_of_md": false, 00:24:27.782 "dif_pi_format": 0 00:24:27.782 } 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "method": "bdev_wait_for_examine" 00:24:27.782 } 00:24:27.782 ] 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "subsystem": "nbd", 00:24:27.782 "config": [] 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "subsystem": "scheduler", 00:24:27.782 "config": [ 00:24:27.782 { 00:24:27.782 "method": "framework_set_scheduler", 00:24:27.782 "params": { 00:24:27.782 "name": "static" 00:24:27.782 } 00:24:27.782 } 00:24:27.782 ] 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "subsystem": "nvmf", 00:24:27.782 "config": [ 00:24:27.782 { 00:24:27.782 "method": "nvmf_set_config", 00:24:27.782 "params": { 00:24:27.782 "discovery_filter": "match_any", 00:24:27.782 "admin_cmd_passthru": { 00:24:27.782 "identify_ctrlr": false 00:24:27.782 }, 00:24:27.782 "dhchap_digests": [ 00:24:27.782 "sha256", 00:24:27.782 "sha384", 00:24:27.782 "sha512" 00:24:27.782 ], 00:24:27.782 "dhchap_dhgroups": [ 00:24:27.782 "null", 00:24:27.782 "ffdhe2048", 00:24:27.782 "ffdhe3072", 00:24:27.782 "ffdhe4096", 00:24:27.782 "ffdhe6144", 00:24:27.782 "ffdhe8192" 00:24:27.782 ] 00:24:27.782 } 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "method": "nvmf_set_max_subsystems", 00:24:27.782 "params": { 00:24:27.782 "max_subsystems": 1024 00:24:27.782 } 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "method": "nvmf_set_crdt", 00:24:27.782 "params": { 00:24:27.782 "crdt1": 0, 00:24:27.782 "crdt2": 0, 00:24:27.782 "crdt3": 0 00:24:27.782 } 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "method": "nvmf_create_transport", 00:24:27.782 "params": 
{ 00:24:27.782 "trtype": "TCP", 00:24:27.782 "max_queue_depth": 128, 00:24:27.782 "max_io_qpairs_per_ctrlr": 127, 00:24:27.782 "in_capsule_data_size": 4096, 00:24:27.782 "max_io_size": 131072, 00:24:27.782 "io_unit_size": 131072, 00:24:27.782 "max_aq_depth": 128, 00:24:27.782 "num_shared_buffers": 511, 00:24:27.782 "buf_cache_size": 4294967295, 00:24:27.782 "dif_insert_or_strip": false, 00:24:27.782 "zcopy": false, 00:24:27.782 "c2h_success": false, 00:24:27.782 "sock_priority": 0, 00:24:27.782 "abort_timeout_sec": 1, 00:24:27.782 "ack_timeout": 0, 00:24:27.782 "data_wr_pool_size": 0 00:24:27.782 } 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "method": "nvmf_create_subsystem", 00:24:27.782 "params": { 00:24:27.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.782 "allow_any_host": false, 00:24:27.782 "serial_number": "00000000000000000000", 00:24:27.782 "model_number": "SPDK bdev Controller", 00:24:27.782 "max_namespaces": 32, 00:24:27.782 "min_cntlid": 1, 00:24:27.782 "max_cntlid": 65519, 00:24:27.782 "ana_reporting": false 00:24:27.782 } 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "method": "nvmf_subsystem_add_host", 00:24:27.782 "params": { 00:24:27.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.782 "host": "nqn.2016-06.io.spdk:host1", 00:24:27.782 "psk": "key0" 00:24:27.782 } 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "method": "nvmf_subsystem_add_ns", 00:24:27.782 "params": { 00:24:27.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.782 "namespace": { 00:24:27.782 "nsid": 1, 00:24:27.782 "bdev_name": "malloc0", 00:24:27.782 "nguid": "E7FBE78B1B9246359338000A6BA6231D", 00:24:27.782 "uuid": "e7fbe78b-1b92-4635-9338-000a6ba6231d", 00:24:27.782 "no_auto_visible": false 00:24:27.782 } 00:24:27.782 } 00:24:27.782 }, 00:24:27.782 { 00:24:27.782 "method": "nvmf_subsystem_add_listener", 00:24:27.782 "params": { 00:24:27.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.782 "listen_address": { 00:24:27.782 "trtype": "TCP", 00:24:27.782 "adrfam": "IPv4", 00:24:27.782 "traddr": "10.0.0.2", 00:24:27.782 "trsvcid": "4420" 00:24:27.782 }, 00:24:27.782 "secure_channel": false, 00:24:27.782 "sock_impl": "ssl" 00:24:27.782 } 00:24:27.782 } 00:24:27.782 ] 00:24:27.782 } 00:24:27.782 ] 00:24:27.783 }' 00:24:27.783 21:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:28.041 21:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:28.041 "subsystems": [ 00:24:28.041 { 00:24:28.041 "subsystem": "keyring", 00:24:28.041 "config": [ 00:24:28.041 { 00:24:28.041 "method": "keyring_file_add_key", 00:24:28.041 "params": { 00:24:28.041 "name": "key0", 00:24:28.041 "path": "/tmp/tmp.cxjXfAwzeZ" 00:24:28.041 } 00:24:28.041 } 00:24:28.041 ] 00:24:28.041 }, 00:24:28.041 { 00:24:28.041 "subsystem": "iobuf", 00:24:28.041 "config": [ 00:24:28.041 { 00:24:28.041 "method": "iobuf_set_options", 00:24:28.041 "params": { 00:24:28.041 "small_pool_count": 8192, 00:24:28.041 "large_pool_count": 1024, 00:24:28.041 "small_bufsize": 8192, 00:24:28.041 "large_bufsize": 135168, 00:24:28.041 "enable_numa": false 00:24:28.041 } 00:24:28.041 } 00:24:28.041 ] 00:24:28.041 }, 00:24:28.041 { 00:24:28.041 "subsystem": "sock", 00:24:28.041 "config": [ 00:24:28.041 { 00:24:28.041 "method": "sock_set_default_impl", 00:24:28.041 "params": { 00:24:28.041 "impl_name": "posix" 00:24:28.041 } 00:24:28.041 }, 00:24:28.041 { 00:24:28.041 "method": "sock_impl_set_options", 00:24:28.041 
"params": { 00:24:28.041 "impl_name": "ssl", 00:24:28.041 "recv_buf_size": 4096, 00:24:28.041 "send_buf_size": 4096, 00:24:28.041 "enable_recv_pipe": true, 00:24:28.041 "enable_quickack": false, 00:24:28.041 "enable_placement_id": 0, 00:24:28.041 "enable_zerocopy_send_server": true, 00:24:28.041 "enable_zerocopy_send_client": false, 00:24:28.041 "zerocopy_threshold": 0, 00:24:28.041 "tls_version": 0, 00:24:28.041 "enable_ktls": false 00:24:28.041 } 00:24:28.041 }, 00:24:28.041 { 00:24:28.041 "method": "sock_impl_set_options", 00:24:28.041 "params": { 00:24:28.041 "impl_name": "posix", 00:24:28.041 "recv_buf_size": 2097152, 00:24:28.041 "send_buf_size": 2097152, 00:24:28.041 "enable_recv_pipe": true, 00:24:28.041 "enable_quickack": false, 00:24:28.041 "enable_placement_id": 0, 00:24:28.041 "enable_zerocopy_send_server": true, 00:24:28.041 "enable_zerocopy_send_client": false, 00:24:28.041 "zerocopy_threshold": 0, 00:24:28.041 "tls_version": 0, 00:24:28.041 "enable_ktls": false 00:24:28.041 } 00:24:28.041 } 00:24:28.041 ] 00:24:28.041 }, 00:24:28.041 { 00:24:28.041 "subsystem": "vmd", 00:24:28.041 "config": [] 00:24:28.041 }, 00:24:28.041 { 00:24:28.041 "subsystem": "accel", 00:24:28.041 "config": [ 00:24:28.041 { 00:24:28.041 "method": "accel_set_options", 00:24:28.041 "params": { 00:24:28.041 "small_cache_size": 128, 00:24:28.041 "large_cache_size": 16, 00:24:28.041 "task_count": 2048, 00:24:28.041 "sequence_count": 2048, 00:24:28.041 "buf_count": 2048 00:24:28.041 } 00:24:28.042 } 00:24:28.042 ] 00:24:28.042 }, 00:24:28.042 { 00:24:28.042 "subsystem": "bdev", 00:24:28.042 "config": [ 00:24:28.042 { 00:24:28.042 "method": "bdev_set_options", 00:24:28.042 "params": { 00:24:28.042 "bdev_io_pool_size": 65535, 00:24:28.042 "bdev_io_cache_size": 256, 00:24:28.042 "bdev_auto_examine": true, 00:24:28.042 "iobuf_small_cache_size": 128, 00:24:28.042 "iobuf_large_cache_size": 16 00:24:28.042 } 00:24:28.042 }, 00:24:28.042 { 00:24:28.042 "method": "bdev_raid_set_options", 00:24:28.042 "params": { 00:24:28.042 "process_window_size_kb": 1024, 00:24:28.042 "process_max_bandwidth_mb_sec": 0 00:24:28.042 } 00:24:28.042 }, 00:24:28.042 { 00:24:28.042 "method": "bdev_iscsi_set_options", 00:24:28.042 "params": { 00:24:28.042 "timeout_sec": 30 00:24:28.042 } 00:24:28.042 }, 00:24:28.042 { 00:24:28.042 "method": "bdev_nvme_set_options", 00:24:28.042 "params": { 00:24:28.042 "action_on_timeout": "none", 00:24:28.042 "timeout_us": 0, 00:24:28.042 "timeout_admin_us": 0, 00:24:28.042 "keep_alive_timeout_ms": 10000, 00:24:28.042 "arbitration_burst": 0, 00:24:28.042 "low_priority_weight": 0, 00:24:28.042 "medium_priority_weight": 0, 00:24:28.042 "high_priority_weight": 0, 00:24:28.042 "nvme_adminq_poll_period_us": 10000, 00:24:28.042 "nvme_ioq_poll_period_us": 0, 00:24:28.042 "io_queue_requests": 512, 00:24:28.042 "delay_cmd_submit": true, 00:24:28.042 "transport_retry_count": 4, 00:24:28.042 "bdev_retry_count": 3, 00:24:28.042 "transport_ack_timeout": 0, 00:24:28.042 "ctrlr_loss_timeout_sec": 0, 00:24:28.042 "reconnect_delay_sec": 0, 00:24:28.042 "fast_io_fail_timeout_sec": 0, 00:24:28.042 "disable_auto_failback": false, 00:24:28.042 "generate_uuids": false, 00:24:28.042 "transport_tos": 0, 00:24:28.042 "nvme_error_stat": false, 00:24:28.042 "rdma_srq_size": 0, 00:24:28.042 "io_path_stat": false, 00:24:28.042 "allow_accel_sequence": false, 00:24:28.042 "rdma_max_cq_size": 0, 00:24:28.042 "rdma_cm_event_timeout_ms": 0, 00:24:28.042 "dhchap_digests": [ 00:24:28.042 "sha256", 00:24:28.042 "sha384", 00:24:28.042 
"sha512" 00:24:28.042 ], 00:24:28.042 "dhchap_dhgroups": [ 00:24:28.042 "null", 00:24:28.042 "ffdhe2048", 00:24:28.042 "ffdhe3072", 00:24:28.042 "ffdhe4096", 00:24:28.042 "ffdhe6144", 00:24:28.042 "ffdhe8192" 00:24:28.042 ] 00:24:28.042 } 00:24:28.042 }, 00:24:28.042 { 00:24:28.042 "method": "bdev_nvme_attach_controller", 00:24:28.042 "params": { 00:24:28.042 "name": "nvme0", 00:24:28.042 "trtype": "TCP", 00:24:28.042 "adrfam": "IPv4", 00:24:28.042 "traddr": "10.0.0.2", 00:24:28.042 "trsvcid": "4420", 00:24:28.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.042 "prchk_reftag": false, 00:24:28.042 "prchk_guard": false, 00:24:28.042 "ctrlr_loss_timeout_sec": 0, 00:24:28.042 "reconnect_delay_sec": 0, 00:24:28.042 "fast_io_fail_timeout_sec": 0, 00:24:28.042 "psk": "key0", 00:24:28.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:28.042 "hdgst": false, 00:24:28.042 "ddgst": false, 00:24:28.042 "multipath": "multipath" 00:24:28.042 } 00:24:28.042 }, 00:24:28.042 { 00:24:28.042 "method": "bdev_nvme_set_hotplug", 00:24:28.042 "params": { 00:24:28.042 "period_us": 100000, 00:24:28.042 "enable": false 00:24:28.042 } 00:24:28.042 }, 00:24:28.042 { 00:24:28.042 "method": "bdev_enable_histogram", 00:24:28.042 "params": { 00:24:28.042 "name": "nvme0n1", 00:24:28.042 "enable": true 00:24:28.042 } 00:24:28.042 }, 00:24:28.042 { 00:24:28.042 "method": "bdev_wait_for_examine" 00:24:28.042 } 00:24:28.042 ] 00:24:28.042 }, 00:24:28.042 { 00:24:28.042 "subsystem": "nbd", 00:24:28.042 "config": [] 00:24:28.042 } 00:24:28.042 ] 00:24:28.042 }' 00:24:28.042 21:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3035774 00:24:28.042 21:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3035774 ']' 00:24:28.042 21:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3035774 00:24:28.042 21:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:28.042 21:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.042 21:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3035774 00:24:28.302 21:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:28.302 21:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:28.302 21:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3035774' 00:24:28.302 killing process with pid 3035774 00:24:28.302 21:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3035774 00:24:28.302 Received shutdown signal, test time was about 1.000000 seconds 00:24:28.302 00:24:28.302 Latency(us) 00:24:28.302 [2024-11-19T20:14:02.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.302 [2024-11-19T20:14:02.097Z] =================================================================================================================== 00:24:28.302 [2024-11-19T20:14:02.097Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:28.302 21:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3035774 00:24:29.233 21:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3035622 00:24:29.233 21:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3035622 
']' 00:24:29.233 21:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3035622 00:24:29.233 21:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:29.233 21:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.233 21:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3035622 00:24:29.233 21:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:29.233 21:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:29.233 21:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3035622' 00:24:29.233 killing process with pid 3035622 00:24:29.233 21:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3035622 00:24:29.233 21:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3035622 00:24:30.165 21:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:30.165 21:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:30.165 21:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:30.165 "subsystems": [ 00:24:30.165 { 00:24:30.165 "subsystem": "keyring", 00:24:30.165 "config": [ 00:24:30.165 { 00:24:30.165 "method": "keyring_file_add_key", 00:24:30.165 "params": { 00:24:30.165 "name": "key0", 00:24:30.165 "path": "/tmp/tmp.cxjXfAwzeZ" 00:24:30.165 } 00:24:30.165 } 00:24:30.165 ] 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "subsystem": "iobuf", 00:24:30.165 "config": [ 00:24:30.165 { 00:24:30.165 "method": "iobuf_set_options", 00:24:30.165 "params": { 00:24:30.165 "small_pool_count": 8192, 00:24:30.165 "large_pool_count": 1024, 00:24:30.165 "small_bufsize": 8192, 00:24:30.165 "large_bufsize": 135168, 00:24:30.165 "enable_numa": false 00:24:30.165 } 00:24:30.165 } 00:24:30.165 ] 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "subsystem": "sock", 00:24:30.165 "config": [ 00:24:30.165 { 00:24:30.165 "method": "sock_set_default_impl", 00:24:30.165 "params": { 00:24:30.165 "impl_name": "posix" 00:24:30.165 } 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "method": "sock_impl_set_options", 00:24:30.165 "params": { 00:24:30.165 "impl_name": "ssl", 00:24:30.165 "recv_buf_size": 4096, 00:24:30.165 "send_buf_size": 4096, 00:24:30.165 "enable_recv_pipe": true, 00:24:30.165 "enable_quickack": false, 00:24:30.165 "enable_placement_id": 0, 00:24:30.165 "enable_zerocopy_send_server": true, 00:24:30.165 "enable_zerocopy_send_client": false, 00:24:30.165 "zerocopy_threshold": 0, 00:24:30.165 "tls_version": 0, 00:24:30.165 "enable_ktls": false 00:24:30.165 } 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "method": "sock_impl_set_options", 00:24:30.165 "params": { 00:24:30.165 "impl_name": "posix", 00:24:30.165 "recv_buf_size": 2097152, 00:24:30.165 "send_buf_size": 2097152, 00:24:30.165 "enable_recv_pipe": true, 00:24:30.165 "enable_quickack": false, 00:24:30.165 "enable_placement_id": 0, 00:24:30.165 "enable_zerocopy_send_server": true, 00:24:30.165 "enable_zerocopy_send_client": false, 00:24:30.165 "zerocopy_threshold": 0, 00:24:30.165 "tls_version": 0, 00:24:30.165 "enable_ktls": false 00:24:30.165 } 00:24:30.165 } 00:24:30.165 ] 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "subsystem": 
"vmd", 00:24:30.165 "config": [] 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "subsystem": "accel", 00:24:30.165 "config": [ 00:24:30.165 { 00:24:30.165 "method": "accel_set_options", 00:24:30.165 "params": { 00:24:30.165 "small_cache_size": 128, 00:24:30.165 "large_cache_size": 16, 00:24:30.165 "task_count": 2048, 00:24:30.165 "sequence_count": 2048, 00:24:30.165 "buf_count": 2048 00:24:30.165 } 00:24:30.165 } 00:24:30.165 ] 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "subsystem": "bdev", 00:24:30.165 "config": [ 00:24:30.165 { 00:24:30.165 "method": "bdev_set_options", 00:24:30.165 "params": { 00:24:30.165 "bdev_io_pool_size": 65535, 00:24:30.165 "bdev_io_cache_size": 256, 00:24:30.165 "bdev_auto_examine": true, 00:24:30.165 "iobuf_small_cache_size": 128, 00:24:30.165 "iobuf_large_cache_size": 16 00:24:30.165 } 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "method": "bdev_raid_set_options", 00:24:30.165 "params": { 00:24:30.165 "process_window_size_kb": 1024, 00:24:30.165 "process_max_bandwidth_mb_sec": 0 00:24:30.165 } 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "method": "bdev_iscsi_set_options", 00:24:30.165 "params": { 00:24:30.165 "timeout_sec": 30 00:24:30.165 } 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "method": "bdev_nvme_set_options", 00:24:30.165 "params": { 00:24:30.165 "action_on_timeout": "none", 00:24:30.165 "timeout_us": 0, 00:24:30.165 "timeout_admin_us": 0, 00:24:30.165 "keep_alive_timeout_ms": 10000, 00:24:30.165 "arbitration_burst": 0, 00:24:30.165 "low_priority_weight": 0, 00:24:30.165 "medium_priority_weight": 0, 00:24:30.165 "high_priority_weight": 0, 00:24:30.165 "nvme_adminq_poll_period_us": 10000, 00:24:30.165 "nvme_ioq_poll_period_us": 0, 00:24:30.165 "io_queue_requests": 0, 00:24:30.165 "delay_cmd_submit": true, 00:24:30.165 "transport_retry_count": 4, 00:24:30.165 "bdev_retry_count": 3, 00:24:30.165 "transport_ack_timeout": 0, 00:24:30.165 "ctrlr_loss_timeout_sec": 0, 00:24:30.165 "reconnect_delay_sec": 0, 00:24:30.165 "fast_io_fail_timeout_sec": 0, 00:24:30.165 "disable_auto_failback": false, 00:24:30.165 "generate_uuids": false, 00:24:30.165 "transport_tos": 0, 00:24:30.165 "nvme_error_stat": false, 00:24:30.165 "rdma_srq_size": 0, 00:24:30.165 "io_path_stat": false, 00:24:30.165 "allow_accel_sequence": false, 00:24:30.165 "rdma_max_cq_size": 0, 00:24:30.165 "rdma_cm_event_timeout_ms": 0, 00:24:30.165 "dhchap_digests": [ 00:24:30.165 "sha256", 00:24:30.165 "sha384", 00:24:30.165 "sha512" 00:24:30.165 ], 00:24:30.165 "dhchap_dhgroups": [ 00:24:30.165 "null", 00:24:30.165 "ffdhe2048", 00:24:30.165 "ffdhe3072", 00:24:30.165 "ffdhe4096", 00:24:30.165 "ffdhe6144", 00:24:30.165 "ffdhe8192" 00:24:30.165 ] 00:24:30.165 } 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "method": "bdev_nvme_set_hotplug", 00:24:30.165 "params": { 00:24:30.165 "period_us": 100000, 00:24:30.165 "enable": false 00:24:30.165 } 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "method": "bdev_malloc_create", 00:24:30.165 "params": { 00:24:30.165 "name": "malloc0", 00:24:30.165 "num_blocks": 8192, 00:24:30.165 "block_size": 4096, 00:24:30.165 "physical_block_size": 4096, 00:24:30.165 "uuid": "e7fbe78b-1b92-4635-9338-000a6ba6231d", 00:24:30.165 "optimal_io_boundary": 0, 00:24:30.165 "md_size": 0, 00:24:30.165 "dif_type": 0, 00:24:30.165 "dif_is_head_of_md": false, 00:24:30.165 "dif_pi_format": 0 00:24:30.165 } 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "method": "bdev_wait_for_examine" 00:24:30.165 } 00:24:30.165 ] 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "subsystem": "nbd", 00:24:30.165 "config": 
[] 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "subsystem": "scheduler", 00:24:30.165 "config": [ 00:24:30.165 { 00:24:30.165 "method": "framework_set_scheduler", 00:24:30.165 "params": { 00:24:30.165 "name": "static" 00:24:30.165 } 00:24:30.165 } 00:24:30.165 ] 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "subsystem": "nvmf", 00:24:30.165 "config": [ 00:24:30.165 { 00:24:30.165 "method": "nvmf_set_config", 00:24:30.165 "params": { 00:24:30.165 "discovery_filter": "match_any", 00:24:30.165 "admin_cmd_passthru": { 00:24:30.165 "identify_ctrlr": false 00:24:30.165 }, 00:24:30.165 "dhchap_digests": [ 00:24:30.165 "sha256", 00:24:30.165 "sha384", 00:24:30.165 "sha512" 00:24:30.165 ], 00:24:30.165 "dhchap_dhgroups": [ 00:24:30.165 "null", 00:24:30.165 "ffdhe2048", 00:24:30.165 "ffdhe3072", 00:24:30.165 "ffdhe4096", 00:24:30.165 "ffdhe6144", 00:24:30.165 "ffdhe8192" 00:24:30.165 ] 00:24:30.165 } 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "method": "nvmf_set_max_subsystems", 00:24:30.165 "params": { 00:24:30.165 "max_subsystems": 1024 00:24:30.165 } 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "method": "nvmf_set_crdt", 00:24:30.165 "params": { 00:24:30.165 "crdt1": 0, 00:24:30.165 "crdt2": 0, 00:24:30.165 "crdt3": 0 00:24:30.165 } 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "method": "nvmf_create_transport", 00:24:30.165 "params": { 00:24:30.165 "trtype": "TCP", 00:24:30.165 "max_queue_depth": 128, 00:24:30.165 "max_io_qpairs_per_ctrlr": 127, 00:24:30.165 "in_capsule_data_size": 4096, 00:24:30.165 "max_io_size": 131072, 00:24:30.165 "io_unit_size": 131072, 00:24:30.165 "max_aq_depth": 128, 00:24:30.165 "num_shared_buffers": 511, 00:24:30.165 "buf_cache_size": 4294967295, 00:24:30.165 "dif_insert_or_strip": false, 00:24:30.165 "zcopy": false, 00:24:30.165 "c2h_success": false, 00:24:30.165 "sock_priority": 0, 00:24:30.165 "abort_timeout_sec": 1, 00:24:30.165 "ack_timeout": 0, 00:24:30.165 "data_wr_pool_size": 0 00:24:30.165 } 00:24:30.165 }, 00:24:30.165 { 00:24:30.165 "method": "nvmf_create_subsystem", 00:24:30.166 "params": { 00:24:30.166 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:30.166 "allow_any_host": false, 00:24:30.166 "serial_number": "00000000000000000000", 00:24:30.166 "model_number": "SPDK bdev Controller", 00:24:30.166 "max_namespaces": 32, 00:24:30.166 "min_cntlid": 1, 00:24:30.166 "max_cntlid": 65519, 00:24:30.166 "ana_reporting": false 00:24:30.166 } 00:24:30.166 }, 00:24:30.166 { 00:24:30.166 "method": "nvmf_subsystem_add_host", 00:24:30.166 "params": { 00:24:30.166 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:30.166 "host": "nqn.2016-06.io.spdk:host1", 00:24:30.166 "psk": "key0" 00:24:30.166 } 00:24:30.166 }, 00:24:30.166 { 00:24:30.166 "method": "nvmf_subsystem_add_ns", 00:24:30.166 "params": { 00:24:30.166 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:30.166 "namespace": { 00:24:30.166 "nsid": 1, 00:24:30.166 "bdev_name": "malloc0", 00:24:30.166 "nguid": "E7FBE78B1B9246359338000A6BA6231D", 00:24:30.166 "uuid": "e7fbe78b-1b92-4635-9338-000a6ba6231d", 00:24:30.166 "no_auto_visible": false 00:24:30.166 } 00:24:30.166 } 00:24:30.166 }, 00:24:30.166 { 00:24:30.166 "method": "nvmf_subsystem_add_listener", 00:24:30.166 "params": { 00:24:30.166 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:30.166 "listen_address": { 00:24:30.166 "trtype": "TCP", 00:24:30.166 "adrfam": "IPv4", 00:24:30.166 "traddr": "10.0.0.2", 00:24:30.166 "trsvcid": "4420" 00:24:30.166 }, 00:24:30.166 "secure_channel": false, 00:24:30.166 "sock_impl": "ssl" 00:24:30.166 } 00:24:30.166 } 00:24:30.166 ] 00:24:30.166 } 
00:24:30.166 ] 00:24:30.166 }' 00:24:30.166 21:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:30.166 21:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.166 21:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3036447 00:24:30.166 21:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:30.166 21:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3036447 00:24:30.166 21:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3036447 ']' 00:24:30.166 21:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.166 21:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:30.166 21:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.166 21:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:30.166 21:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.423 [2024-11-19 21:14:04.026637] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:24:30.423 [2024-11-19 21:14:04.026790] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.423 [2024-11-19 21:14:04.194148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.681 [2024-11-19 21:14:04.328850] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.681 [2024-11-19 21:14:04.328945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.681 [2024-11-19 21:14:04.328976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.681 [2024-11-19 21:14:04.329002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.681 [2024-11-19 21:14:04.329022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
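The JSON blob echoed above is the target-side configuration that nvmfappstart hands to nvmf_tgt over /dev/fd/62: it registers the TLS PSK file as keyring entry "key0", recreates subsystem nqn.2016-06.io.spdk:cnode1 with host1 admitted via that PSK, and opens an ssl-backed TCP listener on 10.0.0.2:4420. The sketch below reproduces just that TLS-relevant slice by hand; it is illustrative only, assuming the SPDK repo root as working directory, a temp file in place of the /dev/fd/62 trick, and a config trimmed of the bdev/malloc0 and tuning sections shown in the full dump.

config=$(mktemp)
cat > "$config" <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.cxjXfAwzeZ" } }
      ]
    },
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } },
        { "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.2", "trsvcid": "4420" },
                      "secure_channel": false, "sock_impl": "ssl" } }
      ]
    }
  ]
}
JSON
# Same invocation the test uses, but pointed at the temp file instead of /dev/fd/62.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c "$config" &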
00:24:30.681 [2024-11-19 21:14:04.330727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.249 [2024-11-19 21:14:04.871573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.249 [2024-11-19 21:14:04.903595] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:31.249 [2024-11-19 21:14:04.903952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.249 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.249 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:31.249 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:31.249 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:31.249 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.249 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.249 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3036599 00:24:31.249 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3036599 /var/tmp/bdevperf.sock 00:24:31.249 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3036599 ']' 00:24:31.249 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.249 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:31.249 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.249 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
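What follows is the initiator half of the test: bdevperf is launched with -z (start suspended until told to run) and -c pointing at the JSON echoed below, then driven over its own RPC socket at /var/tmp/bdevperf.sock. A minimal sketch of that flow, assuming repo-root-relative paths, a placeholder $bdevperf_config file holding that JSON, and a simple poll loop standing in for the test's waitforlisten helper:

./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c "$bdevperf_config" &
bdevperf_pid=$!

# Poll until the app answers on its RPC socket (waitforlisten does this in the test).
until ./scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done

# Confirm the TLS attach produced a controller, then kick off the verify workload.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
wait "$bdevperf_pid"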
00:24:31.249 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:31.249 "subsystems": [ 00:24:31.249 { 00:24:31.249 "subsystem": "keyring", 00:24:31.249 "config": [ 00:24:31.249 { 00:24:31.249 "method": "keyring_file_add_key", 00:24:31.249 "params": { 00:24:31.249 "name": "key0", 00:24:31.249 "path": "/tmp/tmp.cxjXfAwzeZ" 00:24:31.249 } 00:24:31.249 } 00:24:31.249 ] 00:24:31.249 }, 00:24:31.249 { 00:24:31.249 "subsystem": "iobuf", 00:24:31.249 "config": [ 00:24:31.249 { 00:24:31.249 "method": "iobuf_set_options", 00:24:31.249 "params": { 00:24:31.249 "small_pool_count": 8192, 00:24:31.249 "large_pool_count": 1024, 00:24:31.249 "small_bufsize": 8192, 00:24:31.249 "large_bufsize": 135168, 00:24:31.249 "enable_numa": false 00:24:31.249 } 00:24:31.249 } 00:24:31.249 ] 00:24:31.249 }, 00:24:31.249 { 00:24:31.249 "subsystem": "sock", 00:24:31.249 "config": [ 00:24:31.249 { 00:24:31.249 "method": "sock_set_default_impl", 00:24:31.249 "params": { 00:24:31.249 "impl_name": "posix" 00:24:31.249 } 00:24:31.249 }, 00:24:31.249 { 00:24:31.249 "method": "sock_impl_set_options", 00:24:31.249 "params": { 00:24:31.249 "impl_name": "ssl", 00:24:31.249 "recv_buf_size": 4096, 00:24:31.249 "send_buf_size": 4096, 00:24:31.249 "enable_recv_pipe": true, 00:24:31.249 "enable_quickack": false, 00:24:31.249 "enable_placement_id": 0, 00:24:31.249 "enable_zerocopy_send_server": true, 00:24:31.249 "enable_zerocopy_send_client": false, 00:24:31.249 "zerocopy_threshold": 0, 00:24:31.249 "tls_version": 0, 00:24:31.249 "enable_ktls": false 00:24:31.249 } 00:24:31.249 }, 00:24:31.249 { 00:24:31.249 "method": "sock_impl_set_options", 00:24:31.249 "params": { 00:24:31.249 "impl_name": "posix", 00:24:31.249 "recv_buf_size": 2097152, 00:24:31.249 "send_buf_size": 2097152, 00:24:31.249 "enable_recv_pipe": true, 00:24:31.249 "enable_quickack": false, 00:24:31.249 "enable_placement_id": 0, 00:24:31.249 "enable_zerocopy_send_server": true, 00:24:31.249 "enable_zerocopy_send_client": false, 00:24:31.249 "zerocopy_threshold": 0, 00:24:31.249 "tls_version": 0, 00:24:31.249 "enable_ktls": false 00:24:31.249 } 00:24:31.249 } 00:24:31.249 ] 00:24:31.249 }, 00:24:31.249 { 00:24:31.249 "subsystem": "vmd", 00:24:31.249 "config": [] 00:24:31.249 }, 00:24:31.249 { 00:24:31.249 "subsystem": "accel", 00:24:31.249 "config": [ 00:24:31.249 { 00:24:31.249 "method": "accel_set_options", 00:24:31.249 "params": { 00:24:31.249 "small_cache_size": 128, 00:24:31.249 "large_cache_size": 16, 00:24:31.249 "task_count": 2048, 00:24:31.249 "sequence_count": 2048, 00:24:31.249 "buf_count": 2048 00:24:31.249 } 00:24:31.249 } 00:24:31.249 ] 00:24:31.249 }, 00:24:31.249 { 00:24:31.249 "subsystem": "bdev", 00:24:31.249 "config": [ 00:24:31.249 { 00:24:31.249 "method": "bdev_set_options", 00:24:31.249 "params": { 00:24:31.249 "bdev_io_pool_size": 65535, 00:24:31.249 "bdev_io_cache_size": 256, 00:24:31.249 "bdev_auto_examine": true, 00:24:31.249 "iobuf_small_cache_size": 128, 00:24:31.249 "iobuf_large_cache_size": 16 00:24:31.249 } 00:24:31.249 }, 00:24:31.249 { 00:24:31.249 "method": "bdev_raid_set_options", 00:24:31.249 "params": { 00:24:31.249 "process_window_size_kb": 1024, 00:24:31.249 "process_max_bandwidth_mb_sec": 0 00:24:31.249 } 00:24:31.249 }, 00:24:31.249 { 00:24:31.249 "method": "bdev_iscsi_set_options", 00:24:31.249 "params": { 00:24:31.249 "timeout_sec": 30 00:24:31.249 } 00:24:31.249 }, 00:24:31.249 { 00:24:31.249 "method": "bdev_nvme_set_options", 00:24:31.249 "params": { 00:24:31.249 "action_on_timeout": "none", 
00:24:31.249 "timeout_us": 0, 00:24:31.249 "timeout_admin_us": 0, 00:24:31.249 "keep_alive_timeout_ms": 10000, 00:24:31.249 "arbitration_burst": 0, 00:24:31.249 "low_priority_weight": 0, 00:24:31.249 "medium_priority_weight": 0, 00:24:31.249 "high_priority_weight": 0, 00:24:31.249 "nvme_adminq_poll_period_us": 10000, 00:24:31.249 "nvme_ioq_poll_period_us": 0, 00:24:31.249 "io_queue_requests": 512, 00:24:31.249 "delay_cmd_submit": true, 00:24:31.249 "transport_retry_count": 4, 00:24:31.249 "bdev_retry_count": 3, 00:24:31.249 "transport_ack_timeout": 0, 00:24:31.249 "ctrlr_loss_timeout_sec": 0, 00:24:31.249 "reconnect_delay_sec": 0, 00:24:31.249 "fast_io_fail_timeout_sec": 0, 00:24:31.249 "disable_auto_failback": false, 00:24:31.249 "generate_uuids": false, 00:24:31.249 "transport_tos": 0, 00:24:31.249 "nvme_error_stat": false, 00:24:31.249 "rdma_srq_size": 0, 00:24:31.250 "io_path_stat": false, 00:24:31.250 "allow_accel_sequence": false, 00:24:31.250 "rdma_max_cq_size": 0, 00:24:31.250 "rdma_cm_event_timeout_ms": 0, 00:24:31.250 "dhchap_digests": [ 00:24:31.250 "sha256", 00:24:31.250 "sha384", 00:24:31.250 "sha512" 00:24:31.250 ], 00:24:31.250 "dhchap_dhgroups": [ 00:24:31.250 "null", 00:24:31.250 "ffdhe2048", 00:24:31.250 "ffdhe3072", 00:24:31.250 "ffdhe4096", 00:24:31.250 "ffdhe6144", 00:24:31.250 "ffdhe8192" 00:24:31.250 ] 00:24:31.250 } 00:24:31.250 }, 00:24:31.250 { 00:24:31.250 "method": "bdev_nvme_attach_controller", 00:24:31.250 "params": { 00:24:31.250 "name": "nvme0", 00:24:31.250 "trtype": "TCP", 00:24:31.250 "adrfam": "IPv4", 00:24:31.250 "traddr": "10.0.0.2", 00:24:31.250 "trsvcid": "4420", 00:24:31.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:31.250 "prchk_reftag": false, 00:24:31.250 "prchk_guard": false, 00:24:31.250 "ctrlr_loss_timeout_sec": 0, 00:24:31.250 "reconnect_delay_sec": 0, 00:24:31.250 "fast_io_fail_timeout_sec": 0, 00:24:31.250 "psk": "key0", 00:24:31.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:31.250 "hdgst": false, 00:24:31.250 "ddgst": false, 00:24:31.250 "multipath": "multipath" 00:24:31.250 } 00:24:31.250 }, 00:24:31.250 { 00:24:31.250 "method": "bdev_nvme_set_hotplug", 00:24:31.250 "params": { 00:24:31.250 "period_us": 100000, 00:24:31.250 "enable": false 00:24:31.250 } 00:24:31.250 }, 00:24:31.250 { 00:24:31.250 "method": "bdev_enable_histogram", 00:24:31.250 "params": { 00:24:31.250 "name": "nvme0n1", 00:24:31.250 "enable": true 00:24:31.250 } 00:24:31.250 }, 00:24:31.250 { 00:24:31.250 "method": "bdev_wait_for_examine" 00:24:31.250 } 00:24:31.250 ] 00:24:31.250 }, 00:24:31.250 { 00:24:31.250 "subsystem": "nbd", 00:24:31.250 "config": [] 00:24:31.250 } 00:24:31.250 ] 00:24:31.250 }' 00:24:31.250 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.250 21:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.507 [2024-11-19 21:14:05.110919] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:24:31.507 [2024-11-19 21:14:05.111091] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3036599 ] 00:24:31.507 [2024-11-19 21:14:05.255392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.764 [2024-11-19 21:14:05.391817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.330 [2024-11-19 21:14:05.834479] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:32.330 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.330 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:32.330 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:32.330 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:32.588 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.588 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:32.847 Running I/O for 1 seconds... 00:24:33.780 2534.00 IOPS, 9.90 MiB/s 00:24:33.780 Latency(us) 00:24:33.780 [2024-11-19T20:14:07.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.780 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:33.780 Verification LBA range: start 0x0 length 0x2000 00:24:33.780 nvme0n1 : 1.03 2589.53 10.12 0.00 0.00 48855.12 8932.31 42525.58 00:24:33.780 [2024-11-19T20:14:07.575Z] =================================================================================================================== 00:24:33.780 [2024-11-19T20:14:07.575Z] Total : 2589.53 10.12 0.00 0.00 48855.12 8932.31 42525.58 00:24:33.780 { 00:24:33.780 "results": [ 00:24:33.780 { 00:24:33.780 "job": "nvme0n1", 00:24:33.780 "core_mask": "0x2", 00:24:33.780 "workload": "verify", 00:24:33.780 "status": "finished", 00:24:33.780 "verify_range": { 00:24:33.780 "start": 0, 00:24:33.780 "length": 8192 00:24:33.780 }, 00:24:33.780 "queue_depth": 128, 00:24:33.780 "io_size": 4096, 00:24:33.781 "runtime": 1.027984, 00:24:33.781 "iops": 2589.5344674625285, 00:24:33.781 "mibps": 10.115369013525502, 00:24:33.781 "io_failed": 0, 00:24:33.781 "io_timeout": 0, 00:24:33.781 "avg_latency_us": 48855.12120878204, 00:24:33.781 "min_latency_us": 8932.314074074075, 00:24:33.781 "max_latency_us": 42525.58222222222 00:24:33.781 } 00:24:33.781 ], 00:24:33.781 "core_count": 1 00:24:33.781 } 00:24:33.781 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:33.781 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:33.781 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:33.781 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:33.781 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:33.781 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:24:33.781 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:33.781 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:33.781 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:33.781 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:33.781 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:33.781 nvmf_trace.0 00:24:33.781 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:33.781 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3036599 00:24:33.781 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3036599 ']' 00:24:33.781 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3036599 00:24:33.781 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:33.781 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.781 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3036599 00:24:34.039 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:34.039 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:34.039 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3036599' 00:24:34.039 killing process with pid 3036599 00:24:34.039 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3036599 00:24:34.039 Received shutdown signal, test time was about 1.000000 seconds 00:24:34.039 00:24:34.039 Latency(us) 00:24:34.039 [2024-11-19T20:14:07.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.039 [2024-11-19T20:14:07.834Z] =================================================================================================================== 00:24:34.039 [2024-11-19T20:14:07.834Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:34.039 21:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3036599 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:34.974 rmmod nvme_tcp 00:24:34.974 rmmod nvme_fabrics 00:24:34.974 rmmod nvme_keyring 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:34.974 21:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3036447 ']' 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3036447 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3036447 ']' 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3036447 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3036447 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3036447' 00:24:34.974 killing process with pid 3036447 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3036447 00:24:34.974 21:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3036447 00:24:36.348 21:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:36.348 21:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:36.348 21:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:36.348 21:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:36.348 21:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:36.348 21:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:36.348 21:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:36.348 21:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:36.348 21:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:36.348 21:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.348 21:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.348 21:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.253 21:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:38.253 21:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.yPzNOzrgnR /tmp/tmp.JQITxZD7mL /tmp/tmp.cxjXfAwzeZ 00:24:38.253 00:24:38.253 real 1m52.344s 00:24:38.253 user 3m6.662s 00:24:38.253 sys 0m27.021s 00:24:38.253 21:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:38.253 21:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.253 ************************************ 00:24:38.253 END TEST nvmf_tls 
00:24:38.253 ************************************ 00:24:38.253 21:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:38.253 21:14:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:38.253 21:14:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:38.254 21:14:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:38.254 ************************************ 00:24:38.254 START TEST nvmf_fips 00:24:38.254 ************************************ 00:24:38.254 21:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:38.254 * Looking for test storage... 00:24:38.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:38.254 21:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:38.254 21:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:24:38.254 21:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:38.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.513 --rc genhtml_branch_coverage=1 00:24:38.513 --rc genhtml_function_coverage=1 00:24:38.513 --rc genhtml_legend=1 00:24:38.513 --rc geninfo_all_blocks=1 00:24:38.513 --rc geninfo_unexecuted_blocks=1 00:24:38.513 00:24:38.513 ' 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:38.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.513 --rc genhtml_branch_coverage=1 00:24:38.513 --rc genhtml_function_coverage=1 00:24:38.513 --rc genhtml_legend=1 00:24:38.513 --rc geninfo_all_blocks=1 00:24:38.513 --rc geninfo_unexecuted_blocks=1 00:24:38.513 00:24:38.513 ' 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:38.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.513 --rc genhtml_branch_coverage=1 00:24:38.513 --rc genhtml_function_coverage=1 00:24:38.513 --rc genhtml_legend=1 00:24:38.513 --rc geninfo_all_blocks=1 00:24:38.513 --rc geninfo_unexecuted_blocks=1 00:24:38.513 00:24:38.513 ' 00:24:38.513 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:38.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.513 --rc genhtml_branch_coverage=1 00:24:38.513 --rc genhtml_function_coverage=1 00:24:38.513 --rc genhtml_legend=1 00:24:38.513 --rc geninfo_all_blocks=1 00:24:38.513 --rc geninfo_unexecuted_blocks=1 00:24:38.513 00:24:38.514 ' 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:38.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:38.514 21:14:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:38.514 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:38.515 Error setting digest 00:24:38.515 40A23FD3697F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:38.515 40A23FD3697F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:38.515 
21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:38.515 21:14:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:40.415 21:14:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:40.415 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:40.415 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:40.415 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:40.416 21:14:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:40.416 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:40.416 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:40.416 21:14:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:40.416 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:40.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:40.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:24:40.674 00:24:40.674 --- 10.0.0.2 ping statistics --- 00:24:40.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.674 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:40.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:40.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:24:40.674 00:24:40.674 --- 10.0.0.1 ping statistics --- 00:24:40.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.674 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3039099 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3039099 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3039099 ']' 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:40.674 21:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:40.933 [2024-11-19 21:14:14.490986] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:24:40.933 [2024-11-19 21:14:14.491154] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.933 [2024-11-19 21:14:14.630549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.192 [2024-11-19 21:14:14.768693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:41.192 [2024-11-19 21:14:14.768787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:41.192 [2024-11-19 21:14:14.768812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:41.192 [2024-11-19 21:14:14.768836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:41.192 [2024-11-19 21:14:14.768863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:41.192 [2024-11-19 21:14:14.770518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.757 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:41.757 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:41.757 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:41.757 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:41.757 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:41.757 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:41.758 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:41.758 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:41.758 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:41.758 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Nmu 00:24:41.758 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:41.758 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Nmu 00:24:41.758 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Nmu 00:24:41.758 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Nmu 00:24:41.758 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:42.016 [2024-11-19 21:14:15.688959] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.016 [2024-11-19 21:14:15.704934] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:42.016 [2024-11-19 21:14:15.705252] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.016 malloc0 00:24:42.016 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:42.016 21:14:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3039283 00:24:42.016 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:42.016 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3039283 /var/tmp/bdevperf.sock 00:24:42.016 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3039283 ']' 00:24:42.016 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:42.016 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.016 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:42.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:42.016 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.016 21:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:42.274 [2024-11-19 21:14:15.920952] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:24:42.274 [2024-11-19 21:14:15.921139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3039283 ] 00:24:42.274 [2024-11-19 21:14:16.056646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.532 [2024-11-19 21:14:16.176632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.097 21:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.097 21:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:43.097 21:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Nmu 00:24:43.355 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:43.614 [2024-11-19 21:14:17.366471] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:43.872 TLSTESTn1 00:24:43.872 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:43.872 Running I/O for 10 seconds... 
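The TLS run that follows can be reproduced in isolation with the same commands fips.sh issues above; a minimal sketch, assuming it is run from the SPDK repository root against the target configured earlier (the key material, NQNs and socket paths are the ones logged in this run):

KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
KEY_PATH=$(mktemp -t spdk-psk.XXX)      # per-run PSK file, e.g. /tmp/spdk-psk.Nmu in this run
echo -n "$KEY" > "$KEY_PATH"
chmod 0600 "$KEY_PATH"
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# (the script waits for the bdevperf RPC socket to come up before issuing the calls below)
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY_PATH"
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests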
00:24:46.176 2511.00 IOPS, 9.81 MiB/s [2024-11-19T20:14:20.904Z] 2552.50 IOPS, 9.97 MiB/s [2024-11-19T20:14:21.838Z] 2562.33 IOPS, 10.01 MiB/s [2024-11-19T20:14:22.772Z] 2567.50 IOPS, 10.03 MiB/s [2024-11-19T20:14:23.705Z] 2574.60 IOPS, 10.06 MiB/s [2024-11-19T20:14:24.722Z] 2576.33 IOPS, 10.06 MiB/s [2024-11-19T20:14:25.659Z] 2579.57 IOPS, 10.08 MiB/s [2024-11-19T20:14:27.032Z] 2580.25 IOPS, 10.08 MiB/s [2024-11-19T20:14:27.967Z] 2580.56 IOPS, 10.08 MiB/s [2024-11-19T20:14:27.967Z] 2582.80 IOPS, 10.09 MiB/s 00:24:54.172 Latency(us) 00:24:54.172 [2024-11-19T20:14:27.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.172 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:54.172 Verification LBA range: start 0x0 length 0x2000 00:24:54.172 TLSTESTn1 : 10.03 2587.18 10.11 0.00 0.00 49375.34 14660.65 38059.43 00:24:54.172 [2024-11-19T20:14:27.967Z] =================================================================================================================== 00:24:54.172 [2024-11-19T20:14:27.967Z] Total : 2587.18 10.11 0.00 0.00 49375.34 14660.65 38059.43 00:24:54.172 { 00:24:54.172 "results": [ 00:24:54.172 { 00:24:54.172 "job": "TLSTESTn1", 00:24:54.172 "core_mask": "0x4", 00:24:54.172 "workload": "verify", 00:24:54.172 "status": "finished", 00:24:54.172 "verify_range": { 00:24:54.172 "start": 0, 00:24:54.172 "length": 8192 00:24:54.172 }, 00:24:54.172 "queue_depth": 128, 00:24:54.172 "io_size": 4096, 00:24:54.172 "runtime": 10.032153, 00:24:54.172 "iops": 2587.1814355303395, 00:24:54.172 "mibps": 10.106177482540389, 00:24:54.172 "io_failed": 0, 00:24:54.172 "io_timeout": 0, 00:24:54.172 "avg_latency_us": 49375.33684242671, 00:24:54.172 "min_latency_us": 14660.645925925926, 00:24:54.172 "max_latency_us": 38059.42518518519 00:24:54.172 } 00:24:54.172 ], 00:24:54.172 "core_count": 1 00:24:54.172 } 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:54.172 nvmf_trace.0 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3039283 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3039283 ']' 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 3039283 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3039283 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3039283' 00:24:54.172 killing process with pid 3039283 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3039283 00:24:54.172 Received shutdown signal, test time was about 10.000000 seconds 00:24:54.172 00:24:54.172 Latency(us) 00:24:54.172 [2024-11-19T20:14:27.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.172 [2024-11-19T20:14:27.967Z] =================================================================================================================== 00:24:54.172 [2024-11-19T20:14:27.967Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:54.172 21:14:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3039283 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:55.108 rmmod nvme_tcp 00:24:55.108 rmmod nvme_fabrics 00:24:55.108 rmmod nvme_keyring 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3039099 ']' 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3039099 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3039099 ']' 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3039099 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3039099 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:55.108 21:14:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3039099' 00:24:55.108 killing process with pid 3039099 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3039099 00:24:55.108 21:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3039099 00:24:56.482 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:56.482 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:56.482 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:56.482 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:56.482 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:56.482 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:56.482 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:56.482 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:56.482 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:56.482 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.482 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:56.482 21:14:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.383 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:58.383 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Nmu 00:24:58.383 00:24:58.383 real 0m20.112s 00:24:58.383 user 0m27.232s 00:24:58.383 sys 0m5.299s 00:24:58.383 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:58.383 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:58.383 ************************************ 00:24:58.383 END TEST nvmf_fips 00:24:58.383 ************************************ 00:24:58.383 21:14:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:58.383 21:14:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:58.383 21:14:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:58.383 21:14:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:58.383 ************************************ 00:24:58.383 START TEST nvmf_control_msg_list 00:24:58.383 ************************************ 00:24:58.383 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:58.383 * Looking for test storage... 
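The FIPS-test teardown traced just above restores the firewall by dropping only the rules the test tagged and then removes the per-run state; a rough equivalent (the namespace-removal step is an assumption about what _remove_spdk_ns does, and the PSK filename is this run's temporary file):

iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only rules tagged with the SPDK_NVMF comment
ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns for this run
ip -4 addr flush cvl_0_1                               # drop the initiator-side test address
rm -f /tmp/spdk-psk.Nmu                                # remove the per-run TLS PSK file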
00:24:58.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:58.383 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:58.383 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:24:58.383 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:58.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.642 --rc genhtml_branch_coverage=1 00:24:58.642 --rc genhtml_function_coverage=1 00:24:58.642 --rc genhtml_legend=1 00:24:58.642 --rc geninfo_all_blocks=1 00:24:58.642 --rc geninfo_unexecuted_blocks=1 00:24:58.642 00:24:58.642 ' 00:24:58.642 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:58.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.642 --rc genhtml_branch_coverage=1 00:24:58.642 --rc genhtml_function_coverage=1 00:24:58.642 --rc genhtml_legend=1 00:24:58.643 --rc geninfo_all_blocks=1 00:24:58.643 --rc geninfo_unexecuted_blocks=1 00:24:58.643 00:24:58.643 ' 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:58.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.643 --rc genhtml_branch_coverage=1 00:24:58.643 --rc genhtml_function_coverage=1 00:24:58.643 --rc genhtml_legend=1 00:24:58.643 --rc geninfo_all_blocks=1 00:24:58.643 --rc geninfo_unexecuted_blocks=1 00:24:58.643 00:24:58.643 ' 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:58.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.643 --rc genhtml_branch_coverage=1 00:24:58.643 --rc genhtml_function_coverage=1 00:24:58.643 --rc genhtml_legend=1 00:24:58.643 --rc geninfo_all_blocks=1 00:24:58.643 --rc geninfo_unexecuted_blocks=1 00:24:58.643 00:24:58.643 ' 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:58.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:58.643 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:00.545 21:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:00.545 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.545 21:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:00.545 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.545 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:00.546 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:00.546 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.546 21:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.546 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:00.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:25:00.805 00:25:00.805 --- 10.0.0.2 ping statistics --- 00:25:00.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.805 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:25:00.805 00:25:00.805 --- 10.0.0.1 ping statistics --- 00:25:00.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.805 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3042905 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3042905 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3042905 ']' 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.805 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.805 [2024-11-19 21:14:34.467626] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:25:00.805 [2024-11-19 21:14:34.467775] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.064 [2024-11-19 21:14:34.612317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.064 [2024-11-19 21:14:34.735908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.064 [2024-11-19 21:14:34.736000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:01.064 [2024-11-19 21:14:34.736023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.064 [2024-11-19 21:14:34.736044] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.064 [2024-11-19 21:14:34.736084] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
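At this point the control_msg_list test starts its own nvmf_tgt inside the same namespace; the launch-and-wait logic amounts to the following sketch (the polling loop is an assumed stand-in for the waitforlisten helper, using the standard rpc_get_methods RPC; the binary path assumes the SPDK repository root rather than this workspace's absolute path):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
NVMF_PID=$!
# Assumed waitforlisten equivalent: poll the UNIX-domain RPC socket until the app answers.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done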
00:25:01.064 [2024-11-19 21:14:34.737526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:01.999 [2024-11-19 21:14:35.507518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:01.999 Malloc0 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:01.999 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.999 21:14:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:02.000 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.000 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:02.000 [2024-11-19 21:14:35.578769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.000 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.000 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3043056 00:25:02.000 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:02.000 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3043057 00:25:02.000 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:02.000 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3043058 00:25:02.000 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:02.000 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3043056 00:25:02.000 [2024-11-19 21:14:35.699603] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:02.000 [2024-11-19 21:14:35.700053] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:02.000 [2024-11-19 21:14:35.700493] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:03.374 Initializing NVMe Controllers 00:25:03.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:03.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:03.374 Initialization complete. Launching workers. 
00:25:03.374 ======================================================== 00:25:03.374 Latency(us) 00:25:03.374 Device Information : IOPS MiB/s Average min max 00:25:03.374 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2258.98 8.82 442.05 218.96 41263.08 00:25:03.374 ======================================================== 00:25:03.374 Total : 2258.98 8.82 442.05 218.96 41263.08 00:25:03.374 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3043057 00:25:03.374 Initializing NVMe Controllers 00:25:03.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:03.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:03.374 Initialization complete. Launching workers. 00:25:03.374 ======================================================== 00:25:03.374 Latency(us) 00:25:03.374 Device Information : IOPS MiB/s Average min max 00:25:03.374 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1620.00 6.33 627.08 224.96 41223.96 00:25:03.374 ======================================================== 00:25:03.374 Total : 1620.00 6.33 627.08 224.96 41223.96 00:25:03.374 00:25:03.374 Initializing NVMe Controllers 00:25:03.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:03.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:03.374 Initialization complete. Launching workers. 00:25:03.374 ======================================================== 00:25:03.374 Latency(us) 00:25:03.374 Device Information : IOPS MiB/s Average min max 00:25:03.374 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40894.38 40744.90 41012.70 00:25:03.374 ======================================================== 00:25:03.374 Total : 25.00 0.10 40894.38 40744.90 41012.70 00:25:03.374 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3043058 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:03.374 rmmod nvme_tcp 00:25:03.374 rmmod nvme_fabrics 00:25:03.374 rmmod nvme_keyring 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 
-- # '[' -n 3042905 ']' 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3042905 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3042905 ']' 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3042905 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3042905 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3042905' 00:25:03.374 killing process with pid 3042905 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3042905 00:25:03.374 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3042905 00:25:04.747 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:04.747 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:04.747 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:04.747 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:04.747 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:04.747 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:04.747 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:04.747 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:04.747 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:04.747 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.747 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:04.747 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:06.648 00:25:06.648 real 0m8.166s 00:25:06.648 user 0m7.662s 00:25:06.648 sys 0m2.832s 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:06.648 ************************************ 00:25:06.648 END TEST nvmf_control_msg_list 00:25:06.648 ************************************ 
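For readers skimming the xtrace above, the nvmf_control_msg_list test case reduces to the short sequence below. This is only a condensed sketch of the commands already recorded in the trace, not new material: the long Jenkins workspace path is abbreviated to $SPDK, the target's RPC socket is the default /var/tmp/spdk.sock, and the harness's rpc_cmd helper is assumed to forward to scripts/rpc.py. With --control-msg-num 1 only a single control message buffer is configured, which is presumably what this test case stresses by running three single-queue-depth initiators against it.

  # Sketch condensed from the xtrace above ($SPDK is an abbreviation, not in the log)
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF &   # target (pid 3042905 in this run)

  # TCP transport with a single control message buffer, plus subsystem/namespace/listener
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  $SPDK/scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Three perf initiators on separate cores contend for the single control message buffer
  for core in 0x2 0x4 0x8; do
    $SPDK/build/bin/spdk_nvme_perf -c $core -q 1 -o 4096 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait

The three per-core latency tables above are the output of these perf instances; the run passes as long as each initiator completes, which is why the script only waits on the three pids before tearing the target down.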
00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:06.648 ************************************ 00:25:06.648 START TEST nvmf_wait_for_buf 00:25:06.648 ************************************ 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:06.648 * Looking for test storage... 00:25:06.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:06.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.648 --rc genhtml_branch_coverage=1 00:25:06.648 --rc genhtml_function_coverage=1 00:25:06.648 --rc genhtml_legend=1 00:25:06.648 --rc geninfo_all_blocks=1 00:25:06.648 --rc geninfo_unexecuted_blocks=1 00:25:06.648 00:25:06.648 ' 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:06.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.648 --rc genhtml_branch_coverage=1 00:25:06.648 --rc genhtml_function_coverage=1 00:25:06.648 --rc genhtml_legend=1 00:25:06.648 --rc geninfo_all_blocks=1 00:25:06.648 --rc geninfo_unexecuted_blocks=1 00:25:06.648 00:25:06.648 ' 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:06.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.648 --rc genhtml_branch_coverage=1 00:25:06.648 --rc genhtml_function_coverage=1 00:25:06.648 --rc genhtml_legend=1 00:25:06.648 --rc geninfo_all_blocks=1 00:25:06.648 --rc geninfo_unexecuted_blocks=1 00:25:06.648 00:25:06.648 ' 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:06.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.648 --rc genhtml_branch_coverage=1 00:25:06.648 --rc genhtml_function_coverage=1 00:25:06.648 --rc genhtml_legend=1 00:25:06.648 --rc geninfo_all_blocks=1 00:25:06.648 --rc geninfo_unexecuted_blocks=1 00:25:06.648 00:25:06.648 ' 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:06.648 21:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.648 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.649 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:06.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:06.907 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:06.908 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:06.908 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:25:06.908 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.908 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:06.908 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:06.908 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:06.908 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.908 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.908 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.908 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:06.908 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:06.908 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:06.908 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.814 
21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.814 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:08.815 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:08.815 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:08.815 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:08.815 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.815 21:14:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:08.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:25:08.815 00:25:08.815 --- 10.0.0.2 ping statistics --- 00:25:08.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.815 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:08.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:25:08.815 00:25:08.815 --- 10.0.0.1 ping statistics --- 00:25:08.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.815 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3045268 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3045268 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3045268 ']' 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.815 21:14:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:09.073 [2024-11-19 21:14:42.618771] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:25:09.073 [2024-11-19 21:14:42.618910] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:09.073 [2024-11-19 21:14:42.768225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.331 [2024-11-19 21:14:42.908328] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:09.331 [2024-11-19 21:14:42.908412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:09.331 [2024-11-19 21:14:42.908437] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:09.331 [2024-11-19 21:14:42.908461] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:09.331 [2024-11-19 21:14:42.908480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:09.331 [2024-11-19 21:14:42.910123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.897 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.897 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:25:09.897 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:09.897 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:09.897 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:09.897 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.897 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:09.897 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:09.897 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:09.897 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.897 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:09.897 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.897 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:09.897 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.897 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:09.897 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.897 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:09.897 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.897 21:14:43 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.464 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.464 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:10.464 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.464 21:14:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.464 Malloc0 00:25:10.464 21:14:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.464 21:14:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:10.464 21:14:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.464 21:14:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.464 [2024-11-19 21:14:44.005838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.464 21:14:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.464 21:14:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:10.464 21:14:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.464 21:14:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.464 21:14:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.464 21:14:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:10.464 21:14:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.464 21:14:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.464 21:14:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.464 21:14:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:10.464 21:14:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.464 21:14:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.464 [2024-11-19 21:14:44.030168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.464 21:14:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.465 21:14:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:10.465 [2024-11-19 21:14:44.189231] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:11.840 Initializing NVMe Controllers 00:25:11.840 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:11.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:11.840 Initialization complete. Launching workers. 00:25:11.840 ======================================================== 00:25:11.840 Latency(us) 00:25:11.840 Device Information : IOPS MiB/s Average min max 00:25:11.840 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 125.00 15.62 33316.39 23978.29 63857.59 00:25:11.840 ======================================================== 00:25:11.840 Total : 125.00 15.62 33316.39 23978.29 63857.59 00:25:11.840 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1974 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1974 -eq 0 ]] 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:12.099 rmmod nvme_tcp 00:25:12.099 rmmod nvme_fabrics 00:25:12.099 rmmod nvme_keyring 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3045268 ']' 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3045268 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3045268 ']' 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3045268 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3045268 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3045268' 00:25:12.099 killing process with pid 3045268 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3045268 00:25:12.099 21:14:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3045268 00:25:13.035 21:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:13.035 21:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:13.035 21:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:13.035 21:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:13.035 21:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:13.035 21:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:13.035 21:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:13.035 21:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:13.036 21:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:13.036 21:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.036 21:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.036 21:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.586 21:14:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:15.586 00:25:15.586 real 0m8.563s 00:25:15.586 user 0m5.270s 00:25:15.586 sys 0m2.106s 00:25:15.586 21:14:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:15.586 21:14:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:15.587 ************************************ 00:25:15.587 END TEST nvmf_wait_for_buf 00:25:15.587 ************************************ 00:25:15.587 21:14:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:15.587 21:14:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:15.587 21:14:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:15.587 21:14:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:15.587 21:14:48 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:15.587 ************************************ 00:25:15.587 START TEST nvmf_fuzz 00:25:15.587 ************************************ 00:25:15.587 21:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:15.587 * Looking for test storage... 00:25:15.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:15.587 21:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:15.587 21:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:25:15.587 21:14:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:15.587 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:15.587 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:15.587 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:15.587 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:15.587 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:15.587 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:15.587 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:15.587 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:15.587 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:15.588 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:15.588 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:15.588 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:15.588 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:15.588 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:15.588 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:15.588 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:15.588 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:15.588 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:15.588 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:15.588 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:15.588 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:15.588 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:15.588 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:15.588 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:15.588 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:15.588 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:15.588 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:15.588 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:15.589 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:15.589 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:15.589 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:15.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.589 --rc genhtml_branch_coverage=1 00:25:15.589 --rc genhtml_function_coverage=1 00:25:15.589 --rc genhtml_legend=1 00:25:15.589 --rc geninfo_all_blocks=1 00:25:15.589 --rc geninfo_unexecuted_blocks=1 00:25:15.589 00:25:15.589 ' 00:25:15.589 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:15.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.589 --rc genhtml_branch_coverage=1 00:25:15.589 --rc genhtml_function_coverage=1 00:25:15.589 --rc genhtml_legend=1 00:25:15.589 --rc geninfo_all_blocks=1 00:25:15.589 --rc geninfo_unexecuted_blocks=1 00:25:15.589 00:25:15.589 ' 00:25:15.589 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:15.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.589 --rc genhtml_branch_coverage=1 00:25:15.589 --rc genhtml_function_coverage=1 00:25:15.589 --rc genhtml_legend=1 00:25:15.589 --rc geninfo_all_blocks=1 00:25:15.589 --rc geninfo_unexecuted_blocks=1 00:25:15.589 00:25:15.589 ' 00:25:15.589 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:15.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.589 --rc genhtml_branch_coverage=1 00:25:15.589 --rc genhtml_function_coverage=1 00:25:15.589 --rc genhtml_legend=1 00:25:15.589 --rc geninfo_all_blocks=1 00:25:15.589 --rc geninfo_unexecuted_blocks=1 00:25:15.589 00:25:15.589 ' 00:25:15.589 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:15.589 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:15.589 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:25:15.589 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.589 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.589 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.589 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.590 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.590 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.590 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.590 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.590 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.590 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:15.590 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:15.590 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.590 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.590 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:15.590 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:15.590 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:15.590 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:15.590 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.590 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.590 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.590 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:15.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.591 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.592 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:15.592 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:15.592 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:15.592 21:14:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:17.497 21:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:17.497 21:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:17.497 21:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:17.497 21:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:17.497 21:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:17.497 21:14:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:17.497 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:17.497 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:17.497 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:17.497 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:17.497 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:17.498 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:17.498 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:17.498 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:17.498 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:17.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:17.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:25:17.498 00:25:17.498 --- 10.0.0.2 ping statistics --- 00:25:17.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.498 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:17.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:17.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:25:17.498 00:25:17.498 --- 10.0.0.1 ping statistics --- 00:25:17.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.498 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3047628 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:17.498 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3047628 00:25:17.499 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3047628 ']' 00:25:17.499 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.499 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.499 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
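Everything from the namespace creation through the target launch traced above follows one pattern: isolate the target-side E810 port (cvl_0_0) in its own network namespace, give each side an address on 10.0.0.0/24, open TCP/4420, and only then start nvmf_tgt inside that namespace. A condensed sketch of the same sequence, using the names from this run:

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"                                        # target-side namespace
  ip link set cvl_0_0 netns "$NS"                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the host namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # Accept NVMe/TCP from the initiator port; the SPDK_NVMF comment lets cleanup find the rule later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  # Launch the target on one core inside the namespace, then wait for its RPC socket.
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
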
00:25:17.499 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.499 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:18.876 Malloc0 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:18.876 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:51.055 Fuzzing completed. 
Shutting down the fuzz application 00:25:51.055 00:25:51.055 Dumping successful admin opcodes: 00:25:51.055 8, 9, 10, 24, 00:25:51.055 Dumping successful io opcodes: 00:25:51.055 0, 9, 00:25:51.055 NS: 0x2000008efec0 I/O qp, Total commands completed: 323193, total successful commands: 1906, random_seed: 4009711232 00:25:51.055 NS: 0x2000008efec0 admin qp, Total commands completed: 40704, total successful commands: 332, random_seed: 1448629952 00:25:51.055 21:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:51.314 Fuzzing completed. Shutting down the fuzz application 00:25:51.314 00:25:51.314 Dumping successful admin opcodes: 00:25:51.314 24, 00:25:51.314 Dumping successful io opcodes: 00:25:51.314 00:25:51.314 NS: 0x2000008efec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1488265482 00:25:51.314 NS: 0x2000008efec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1488484890 00:25:51.314 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:51.314 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.314 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:51.314 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.314 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:51.314 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:51.314 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:51.314 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:51.314 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:51.314 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:51.314 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:51.314 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:51.572 rmmod nvme_tcp 00:25:51.572 rmmod nvme_fabrics 00:25:51.572 rmmod nvme_keyring 00:25:51.572 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:51.572 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:51.572 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:51.572 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 3047628 ']' 00:25:51.572 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 3047628 00:25:51.572 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3047628 ']' 00:25:51.572 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 3047628 00:25:51.572 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:51.572 21:15:25 
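The two nvme_fuzz invocations above are the heart of the fabrics_fuzz test: a 30-second seeded random pass against the TCP subsystem (1906 of 323193 I/O commands and 332 of 40704 admin commands were accepted), followed by a deterministic replay of the commands in example.json. A sketch of the full sequence as traced, assuming rpc.py and the fuzzer binary from the SPDK tree used in this run:

  # Target-side setup: a 64 MB malloc namespace with 512-byte blocks, exported on 10.0.0.2:4420.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create -b Malloc0 64 512
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  # Random pass: 30 seconds, fixed seed 123456, core mask 0x2.
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a
  # Deterministic pass: replay the command set described in example.json.
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$TRID" -j test/app/fuzz/nvme_fuzz/example.json -a
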
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:51.572 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3047628 00:25:51.572 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:51.572 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:51.572 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3047628' 00:25:51.572 killing process with pid 3047628 00:25:51.572 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 3047628 00:25:51.572 21:15:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 3047628 00:25:52.947 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:52.947 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:52.947 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:52.947 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:52.947 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:52.947 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:52.947 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:52.947 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:52.947 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:52.948 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.948 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:52.948 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.850 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:54.850 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:54.850 00:25:54.850 real 0m39.712s 00:25:54.850 user 0m57.160s 00:25:54.850 sys 0m13.146s 00:25:54.850 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:54.850 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:54.850 ************************************ 00:25:54.850 END TEST nvmf_fuzz 00:25:54.850 ************************************ 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:55.108 
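The teardown traced above mirrors that setup: kill the target, strip only the SPDK-tagged firewall rules instead of flushing the whole table, then drop the namespace (the namespace removal itself runs inside _remove_spdk_ns with tracing suppressed; ip netns delete is the usual equivalent and is an assumption here, not something visible in the log). A sketch:

  # Remove only the rules this test added; they all carry the SPDK_NVMF comment.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Drop the target-side namespace and the initiator address (netns delete assumed, see above).
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1
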
************************************ 00:25:55.108 START TEST nvmf_multiconnection 00:25:55.108 ************************************ 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:55.108 * Looking for test storage... 00:25:55.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:55.108 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:55.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.109 --rc genhtml_branch_coverage=1 00:25:55.109 --rc genhtml_function_coverage=1 00:25:55.109 --rc genhtml_legend=1 00:25:55.109 --rc geninfo_all_blocks=1 00:25:55.109 --rc geninfo_unexecuted_blocks=1 00:25:55.109 00:25:55.109 ' 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:55.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.109 --rc genhtml_branch_coverage=1 00:25:55.109 --rc genhtml_function_coverage=1 00:25:55.109 --rc genhtml_legend=1 00:25:55.109 --rc geninfo_all_blocks=1 00:25:55.109 --rc geninfo_unexecuted_blocks=1 00:25:55.109 00:25:55.109 ' 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:55.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.109 --rc genhtml_branch_coverage=1 00:25:55.109 --rc genhtml_function_coverage=1 00:25:55.109 --rc genhtml_legend=1 00:25:55.109 --rc geninfo_all_blocks=1 00:25:55.109 --rc geninfo_unexecuted_blocks=1 00:25:55.109 00:25:55.109 ' 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:55.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.109 --rc genhtml_branch_coverage=1 00:25:55.109 --rc genhtml_function_coverage=1 00:25:55.109 --rc genhtml_legend=1 00:25:55.109 --rc geninfo_all_blocks=1 00:25:55.109 --rc geninfo_unexecuted_blocks=1 00:25:55.109 00:25:55.109 ' 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:55.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:55.109 21:15:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:57.010 21:15:30 
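The "[: : integer expression expected" message repeated above (it also appeared during the fuzz setup) comes from a numeric test in nvmf/common.sh being handed an empty string; the comparison simply evaluates false and the run continues, so it is noise rather than a failure. The usual hardening is to default the variable before comparing, sketched here with a hypothetical SOME_FLAG standing in for whichever option common.sh line 33 actually checks:

  # Default the flag to 0 so the numeric test never sees an empty string.
  if [[ "${SOME_FLAG:-0}" -eq 1 ]]; then
      echo "flag enabled"    # placeholder for the real branch
  fi
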
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:57.010 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:57.010 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:57.010 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.010 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:57.011 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:57.011 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:57.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:57.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:25:57.270 00:25:57.270 --- 10.0.0.2 ping statistics --- 00:25:57.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.270 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:57.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:57.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:25:57.270 00:25:57.270 --- 10.0.0.1 ping statistics --- 00:25:57.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.270 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=3054236 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 3054236 00:25:57.270 21:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 3054236 ']' 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:57.270 21:15:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.270 [2024-11-19 21:15:31.032199] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:25:57.270 [2024-11-19 21:15:31.032339] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.528 [2024-11-19 21:15:31.190030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:57.787 [2024-11-19 21:15:31.335448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.787 [2024-11-19 21:15:31.335522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:57.787 [2024-11-19 21:15:31.335548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:57.787 [2024-11-19 21:15:31.335573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:57.787 [2024-11-19 21:15:31.335593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
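For readers tracing the setup above: the nvmf_tcp_init sequence from nvmf/common.sh (visible a few lines back) moves one port of the detected E810 NIC into a private network namespace and leaves the other port in the default namespace, so target and initiator talk over a real link on the same host. A condensed sketch of that plumbing follows, using the interface, namespace and address values this run discovered at runtime (they are not fixed values):

TARGET_IF=cvl_0_0            # target-side port, moved into its own namespace (detected in this run)
INITIATOR_IF=cvl_0_1         # initiator-side port, stays in the default namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# open the NVMe/TCP listener port on the initiator-side interface
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# verify both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

The target itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why the start-up notices around this point report four available cores and the reactor messages that follow cover cores 0-3: the -m 0xF mask spans exactly those four cores.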
00:25:57.787 [2024-11-19 21:15:31.338444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.787 [2024-11-19 21:15:31.338517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.787 [2024-11-19 21:15:31.338612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.787 [2024-11-19 21:15:31.338618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:58.353 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.353 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:58.353 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:58.353 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:58.353 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.353 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.353 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:58.353 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.353 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.353 [2024-11-19 21:15:32.079905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.353 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.353 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:58.353 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.353 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:58.353 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.353 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.612 Malloc1 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
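Once waitforlisten returns, everything else in this test is plain JSON-RPC against the target's /var/tmp/spdk.sock socket. The rpc_cmd calls in the trace around this point create the TCP transport and then, for i = 1..11, a 64 MiB malloc bdev, a subsystem cnode<i> with serial SPDK<i>, a namespace, and a TCP listener on 10.0.0.2:4420 (the trace for cnode1 straddles this point and continues just below). A rough standalone equivalent using scripts/rpc.py from the same SPDK checkout is sketched here; the rpc.py path and the explicit loop are my own framing, since the test drives these calls through its rpc_cmd helper:

rpc=./scripts/rpc.py    # assumed path inside an SPDK checkout

# TCP transport with the same options the test passes (-o, -u 8192)
$rpc nvmf_create_transport -t tcp -o -u 8192

for i in $(seq 1 11); do
    $rpc bdev_malloc_create 64 512 -b Malloc$i                             # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i    # -a: allow any host, -s: serial
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done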
00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.612 [2024-11-19 21:15:32.198145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.612 Malloc2 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.612 21:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.612 Malloc3 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:58.612 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.613 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.613 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.613 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.613 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:58.613 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.613 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.872 Malloc4 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.872 Malloc5 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.872 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.132 Malloc6 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.132 Malloc7 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
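The provisioning loop continues below through Malloc11/cnode11. After it finishes, the script switches to the initiator side: each subsystem is connected over TCP with nvme-cli, and waitforserial polls lsblk until a block device with the expected serial appears (up to 15 tries, 2 seconds apart, as the @1210-@1212 trace lines later show). A condensed sketch of that phase, with the host NQN and host ID taken from this run:

HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID

for i in $(seq 1 11); do
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID \
        -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
    # wait until the namespace shows up with serial SPDK<i> (roughly what waitforserial does)
    for try in $(seq 1 15); do
        lsblk -l -o NAME,SERIAL | grep -q "SPDK$i" && break
        sleep 2
    done
done

With all eleven namespaces visible as /dev/nvme*n1 block devices, the fio-wrapper invocation near the end of this section runs a 10-second sequential read job (256 KiB blocks, queue depth 64, libaio) against each of them, which is where the per-job statistics that close this section come from.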
00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.132 Malloc8 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.132 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.390 Malloc9 00:25:59.390 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.390 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:59.390 21:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.390 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.390 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.390 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:59.390 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.390 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.390 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.390 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:59.391 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.391 21:15:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.391 Malloc10 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.391 Malloc11 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.391 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.649 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.649 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:59.649 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.649 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.649 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.649 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:59.649 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.649 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.649 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.649 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:59.649 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.649 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:00.216 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:00.216 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:00.216 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:00.216 21:15:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:00.216 21:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:02.745 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:02.745 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:02.745 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:26:02.745 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:02.745 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:02.745 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:02.745 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:02.745 21:15:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:03.003 21:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:03.003 21:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:03.003 21:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:03.003 21:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:03.003 21:15:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:04.901 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:04.901 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:04.901 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:26:04.901 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:04.901 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:04.901 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:04.901 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.901 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:05.468 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:05.468 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:05.468 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:26:05.468 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:05.468 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:07.997 21:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:07.997 21:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:07.997 21:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:26:07.997 21:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:07.997 21:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.997 21:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:07.997 21:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.997 21:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:08.255 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:08.255 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:08.255 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:08.255 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:08.255 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:10.785 21:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:10.785 21:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:10.785 21:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:26:10.785 21:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:10.785 21:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:10.785 21:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:10.785 21:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.785 21:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:11.043 21:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:11.043 21:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:26:11.043 21:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:11.043 21:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:11.043 21:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:12.940 21:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:12.940 21:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:12.940 21:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:26:12.940 21:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:12.940 21:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:13.199 21:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:13.199 21:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.199 21:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:13.766 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:13.766 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:13.766 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:13.766 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:13.766 21:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:16.292 21:15:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:16.292 21:15:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:16.292 21:15:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:26:16.292 21:15:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:16.292 21:15:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:16.292 21:15:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:16.292 21:15:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.292 21:15:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:16.550 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:16.550 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:16.550 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:16.550 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:16.550 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:19.105 21:15:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:19.105 21:15:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:19.105 21:15:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:26:19.105 21:15:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:19.105 21:15:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:19.105 21:15:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:19.105 21:15:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:19.105 21:15:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:19.698 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:19.698 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:19.698 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:19.698 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:19.698 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:21.593 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:21.593 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:21.594 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:26:21.594 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:21.594 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:21.594 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:21.594 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.594 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:22.528 21:15:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:22.528 21:15:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:22.528 21:15:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:22.528 21:15:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:22.528 21:15:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:24.425 21:15:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:24.425 21:15:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:24.425 21:15:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:26:24.425 21:15:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:24.425 21:15:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:24.425 21:15:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:24.425 21:15:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.425 21:15:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:25.360 21:15:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:25.360 21:15:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:25.360 21:15:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:25.360 21:15:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:25.360 21:15:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:27.273 21:16:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:27.273 21:16:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:27.273 21:16:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:26:27.273 21:16:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:27.273 21:16:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:27.273 21:16:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:27.273 21:16:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:27.273 21:16:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:28.207 21:16:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:28.207 21:16:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:28.207 21:16:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:28.207 21:16:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:28.207 21:16:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:30.108 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:30.108 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:30.108 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:30.108 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:30.108 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:30.108 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:30.108 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:30.108 [global] 00:26:30.108 thread=1 00:26:30.108 invalidate=1 00:26:30.108 rw=read 00:26:30.108 time_based=1 00:26:30.108 runtime=10 00:26:30.108 ioengine=libaio 00:26:30.108 direct=1 00:26:30.108 bs=262144 00:26:30.108 iodepth=64 00:26:30.108 norandommap=1 00:26:30.108 numjobs=1 00:26:30.108 00:26:30.108 [job0] 00:26:30.108 filename=/dev/nvme0n1 00:26:30.108 [job1] 00:26:30.108 filename=/dev/nvme10n1 00:26:30.108 [job2] 00:26:30.108 filename=/dev/nvme1n1 00:26:30.108 [job3] 00:26:30.108 filename=/dev/nvme2n1 00:26:30.108 [job4] 00:26:30.108 filename=/dev/nvme3n1 00:26:30.108 [job5] 00:26:30.108 filename=/dev/nvme4n1 00:26:30.108 [job6] 00:26:30.108 filename=/dev/nvme5n1 00:26:30.108 [job7] 00:26:30.108 filename=/dev/nvme6n1 00:26:30.108 [job8] 00:26:30.108 filename=/dev/nvme7n1 00:26:30.108 [job9] 00:26:30.108 filename=/dev/nvme8n1 00:26:30.108 [job10] 00:26:30.108 filename=/dev/nvme9n1 00:26:30.366 Could not set queue depth (nvme0n1) 00:26:30.366 Could not set queue depth (nvme10n1) 00:26:30.366 Could not set queue depth (nvme1n1) 00:26:30.366 Could not set queue depth (nvme2n1) 00:26:30.366 Could not set queue depth (nvme3n1) 00:26:30.366 Could not set queue depth (nvme4n1) 00:26:30.366 Could not set queue depth (nvme5n1) 00:26:30.366 Could not set queue depth (nvme6n1) 00:26:30.366 Could not set queue depth (nvme7n1) 00:26:30.366 Could not set queue depth (nvme8n1) 00:26:30.366 Could not set queue depth (nvme9n1) 00:26:30.366 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.366 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.366 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.366 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.366 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.366 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.366 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.366 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.366 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.366 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.366 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.366 fio-3.35 00:26:30.366 Starting 11 threads 00:26:42.567 00:26:42.567 job0: (groupid=0, jobs=1): err= 0: pid=3058616: Tue Nov 19 21:16:14 2024 00:26:42.567 read: IOPS=288, BW=72.1MiB/s (75.6MB/s)(732MiB/10150msec) 00:26:42.567 slat (usec): min=8, max=489502, avg=2564.28, stdev=16850.22 00:26:42.567 clat (msec): min=14, max=895, avg=219.04, stdev=150.49 00:26:42.567 lat (msec): min=14, max=942, avg=221.60, stdev=152.88 00:26:42.567 clat percentiles (msec): 00:26:42.567 | 1.00th=[ 48], 5.00th=[ 79], 10.00th=[ 102], 20.00th=[ 120], 00:26:42.567 | 30.00th=[ 133], 40.00th=[ 146], 50.00th=[ 153], 60.00th=[ 176], 00:26:42.567 | 70.00th=[ 220], 80.00th=[ 326], 90.00th=[ 451], 95.00th=[ 567], 00:26:42.567 | 99.00th=[ 709], 99.50th=[ 709], 99.90th=[ 768], 99.95th=[ 768], 00:26:42.567 | 99.99th=[ 894] 00:26:42.567 bw ( KiB/s): min=22016, max=136704, per=9.05%, avg=73340.65, stdev=37322.83, samples=20 00:26:42.567 iops : min= 86, max= 534, avg=286.40, stdev=145.87, samples=20 00:26:42.567 lat (msec) : 20=0.38%, 50=0.92%, 100=7.78%, 250=63.23%, 500=20.38% 00:26:42.567 lat (msec) : 750=7.20%, 1000=0.10% 00:26:42.567 cpu : usr=0.09%, sys=0.85%, ctx=338, majf=0, minf=4097 00:26:42.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:26:42.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.567 issued rwts: total=2929,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.567 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.567 job1: (groupid=0, jobs=1): err= 0: pid=3058617: Tue Nov 19 21:16:14 2024 00:26:42.567 read: IOPS=164, BW=41.1MiB/s (43.1MB/s)(419MiB/10179msec) 00:26:42.567 slat (usec): min=12, max=288846, avg=4540.54, stdev=19695.86 00:26:42.567 clat (msec): min=37, max=1002, avg=384.19, stdev=180.27 00:26:42.567 lat (msec): min=38, max=1053, avg=388.73, stdev=183.75 00:26:42.567 clat percentiles (msec): 00:26:42.567 | 1.00th=[ 41], 5.00th=[ 92], 10.00th=[ 155], 20.00th=[ 218], 00:26:42.567 | 30.00th=[ 268], 40.00th=[ 317], 50.00th=[ 384], 60.00th=[ 426], 00:26:42.567 | 70.00th=[ 493], 80.00th=[ 558], 90.00th=[ 617], 95.00th=[ 667], 00:26:42.567 | 99.00th=[ 768], 99.50th=[ 953], 99.90th=[ 1003], 99.95th=[ 1003], 00:26:42.567 | 99.99th=[ 1003] 00:26:42.567 bw ( KiB/s): min=20480, 
max=91136, per=5.08%, avg=41205.45, stdev=16746.01, samples=20 00:26:42.567 iops : min= 80, max= 356, avg=160.85, stdev=65.46, samples=20 00:26:42.567 lat (msec) : 50=1.85%, 100=3.58%, 250=21.03%, 500=44.03%, 750=28.32% 00:26:42.567 lat (msec) : 1000=1.08%, 2000=0.12% 00:26:42.567 cpu : usr=0.11%, sys=0.68%, ctx=263, majf=0, minf=4097 00:26:42.567 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:26:42.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.567 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.567 issued rwts: total=1674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.567 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.567 job2: (groupid=0, jobs=1): err= 0: pid=3058618: Tue Nov 19 21:16:14 2024 00:26:42.567 read: IOPS=872, BW=218MiB/s (229MB/s)(2187MiB/10029msec) 00:26:42.567 slat (usec): min=12, max=104003, avg=1110.44, stdev=4708.34 00:26:42.567 clat (msec): min=24, max=346, avg=72.21, stdev=42.48 00:26:42.567 lat (msec): min=26, max=419, avg=73.32, stdev=43.05 00:26:42.567 clat percentiles (msec): 00:26:42.567 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 41], 00:26:42.567 | 30.00th=[ 44], 40.00th=[ 48], 50.00th=[ 54], 60.00th=[ 74], 00:26:42.567 | 70.00th=[ 86], 80.00th=[ 100], 90.00th=[ 121], 95.00th=[ 148], 00:26:42.567 | 99.00th=[ 251], 99.50th=[ 288], 99.90th=[ 321], 99.95th=[ 321], 00:26:42.567 | 99.99th=[ 347] 00:26:42.567 bw ( KiB/s): min=64512, max=413696, per=27.42%, avg=222214.20, stdev=103610.22, samples=20 00:26:42.567 iops : min= 252, max= 1616, avg=867.90, stdev=404.79, samples=20 00:26:42.567 lat (msec) : 50=45.11%, 100=36.33%, 250=17.56%, 500=1.01% 00:26:42.567 cpu : usr=0.47%, sys=2.55%, ctx=901, majf=0, minf=4097 00:26:42.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:42.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.568 issued rwts: total=8746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.568 job3: (groupid=0, jobs=1): err= 0: pid=3058619: Tue Nov 19 21:16:14 2024 00:26:42.568 read: IOPS=220, BW=55.0MiB/s (57.7MB/s)(560MiB/10183msec) 00:26:42.568 slat (usec): min=13, max=306035, avg=4488.45, stdev=21299.48 00:26:42.568 clat (msec): min=36, max=972, avg=286.05, stdev=184.73 00:26:42.568 lat (msec): min=36, max=994, avg=290.54, stdev=187.42 00:26:42.568 clat percentiles (msec): 00:26:42.568 | 1.00th=[ 41], 5.00th=[ 111], 10.00th=[ 128], 20.00th=[ 146], 00:26:42.568 | 30.00th=[ 176], 40.00th=[ 201], 50.00th=[ 234], 60.00th=[ 259], 00:26:42.568 | 70.00th=[ 305], 80.00th=[ 388], 90.00th=[ 558], 95.00th=[ 751], 00:26:42.568 | 99.00th=[ 877], 99.50th=[ 902], 99.90th=[ 919], 99.95th=[ 969], 00:26:42.568 | 99.99th=[ 969] 00:26:42.568 bw ( KiB/s): min=14848, max=118272, per=6.87%, avg=55710.40, stdev=30496.92, samples=20 00:26:42.568 iops : min= 58, max= 462, avg=217.55, stdev=119.13, samples=20 00:26:42.568 lat (msec) : 50=1.16%, 100=2.14%, 250=53.50%, 500=32.98%, 750=5.44% 00:26:42.568 lat (msec) : 1000=4.77% 00:26:42.568 cpu : usr=0.09%, sys=0.73%, ctx=184, majf=0, minf=4097 00:26:42.568 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:26:42.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:26:42.568 issued rwts: total=2241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.568 job4: (groupid=0, jobs=1): err= 0: pid=3058622: Tue Nov 19 21:16:14 2024 00:26:42.568 read: IOPS=207, BW=51.9MiB/s (54.4MB/s)(528MiB/10178msec) 00:26:42.568 slat (usec): min=13, max=358141, avg=4509.26, stdev=22876.96 00:26:42.568 clat (msec): min=89, max=1126, avg=303.79, stdev=192.30 00:26:42.568 lat (msec): min=89, max=1293, avg=308.30, stdev=195.24 00:26:42.568 clat percentiles (msec): 00:26:42.568 | 1.00th=[ 100], 5.00th=[ 131], 10.00th=[ 140], 20.00th=[ 167], 00:26:42.568 | 30.00th=[ 184], 40.00th=[ 203], 50.00th=[ 230], 60.00th=[ 259], 00:26:42.568 | 70.00th=[ 347], 80.00th=[ 460], 90.00th=[ 575], 95.00th=[ 651], 00:26:42.568 | 99.00th=[ 978], 99.50th=[ 978], 99.90th=[ 1133], 99.95th=[ 1133], 00:26:42.568 | 99.99th=[ 1133] 00:26:42.568 bw ( KiB/s): min=12263, max=101376, per=6.46%, avg=52382.80, stdev=28990.49, samples=20 00:26:42.568 iops : min= 47, max= 396, avg=204.50, stdev=113.31, samples=20 00:26:42.568 lat (msec) : 100=1.37%, 250=55.52%, 500=27.29%, 750=12.41%, 1000=2.94% 00:26:42.568 lat (msec) : 2000=0.47% 00:26:42.568 cpu : usr=0.11%, sys=0.70%, ctx=178, majf=0, minf=4097 00:26:42.568 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:26:42.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.568 issued rwts: total=2111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.568 job5: (groupid=0, jobs=1): err= 0: pid=3058623: Tue Nov 19 21:16:14 2024 00:26:42.568 read: IOPS=170, BW=42.7MiB/s (44.8MB/s)(434MiB/10157msec) 00:26:42.568 slat (usec): min=10, max=332930, avg=5311.37, stdev=21455.20 00:26:42.568 clat (msec): min=2, max=849, avg=368.81, stdev=187.31 00:26:42.568 lat (msec): min=4, max=849, avg=374.12, stdev=190.33 00:26:42.568 clat percentiles (msec): 00:26:42.568 | 1.00th=[ 45], 5.00th=[ 82], 10.00th=[ 109], 20.00th=[ 203], 00:26:42.568 | 30.00th=[ 253], 40.00th=[ 296], 50.00th=[ 334], 60.00th=[ 397], 00:26:42.568 | 70.00th=[ 510], 80.00th=[ 584], 90.00th=[ 625], 95.00th=[ 659], 00:26:42.568 | 99.00th=[ 709], 99.50th=[ 718], 99.90th=[ 785], 99.95th=[ 852], 00:26:42.568 | 99.99th=[ 852] 00:26:42.568 bw ( KiB/s): min=16384, max=85504, per=5.28%, avg=42815.25, stdev=19147.43, samples=20 00:26:42.568 iops : min= 64, max= 334, avg=167.15, stdev=74.79, samples=20 00:26:42.568 lat (msec) : 4=0.06%, 10=0.17%, 50=1.67%, 100=5.65%, 250=22.18% 00:26:42.568 lat (msec) : 500=39.63%, 750=30.36%, 1000=0.29% 00:26:42.568 cpu : usr=0.07%, sys=0.67%, ctx=264, majf=0, minf=4097 00:26:42.568 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:26:42.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.568 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.568 issued rwts: total=1736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.568 job6: (groupid=0, jobs=1): err= 0: pid=3058625: Tue Nov 19 21:16:14 2024 00:26:42.568 read: IOPS=199, BW=50.0MiB/s (52.4MB/s)(509MiB/10179msec) 00:26:42.568 slat (usec): min=8, max=421327, avg=3508.18, stdev=24507.81 00:26:42.568 clat (usec): min=1918, max=1006.6k, avg=316255.70, stdev=208816.57 00:26:42.568 lat (usec): min=1944, 
max=1066.6k, avg=319763.87, stdev=212150.28 00:26:42.568 clat percentiles (msec): 00:26:42.568 | 1.00th=[ 12], 5.00th=[ 31], 10.00th=[ 68], 20.00th=[ 138], 00:26:42.568 | 30.00th=[ 178], 40.00th=[ 197], 50.00th=[ 262], 60.00th=[ 326], 00:26:42.568 | 70.00th=[ 430], 80.00th=[ 567], 90.00th=[ 617], 95.00th=[ 642], 00:26:42.568 | 99.00th=[ 835], 99.50th=[ 835], 99.90th=[ 969], 99.95th=[ 969], 00:26:42.568 | 99.99th=[ 1011] 00:26:42.568 bw ( KiB/s): min= 9728, max=143360, per=6.22%, avg=50448.10, stdev=31131.47, samples=20 00:26:42.568 iops : min= 38, max= 560, avg=196.95, stdev=121.68, samples=20 00:26:42.568 lat (msec) : 2=0.05%, 4=0.10%, 10=0.44%, 20=3.64%, 50=5.36% 00:26:42.568 lat (msec) : 100=2.21%, 250=36.51%, 500=26.34%, 750=23.00%, 1000=2.31% 00:26:42.568 lat (msec) : 2000=0.05% 00:26:42.568 cpu : usr=0.07%, sys=0.57%, ctx=440, majf=0, minf=4097 00:26:42.568 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:42.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.568 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.568 issued rwts: total=2035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.568 job7: (groupid=0, jobs=1): err= 0: pid=3058628: Tue Nov 19 21:16:14 2024 00:26:42.568 read: IOPS=527, BW=132MiB/s (138MB/s)(1327MiB/10071msec) 00:26:42.568 slat (usec): min=11, max=125310, avg=1867.15, stdev=7190.69 00:26:42.568 clat (usec): min=1826, max=429990, avg=119460.55, stdev=68744.55 00:26:42.568 lat (msec): min=2, max=430, avg=121.33, stdev=69.86 00:26:42.568 clat percentiles (msec): 00:26:42.568 | 1.00th=[ 31], 5.00th=[ 58], 10.00th=[ 66], 20.00th=[ 79], 00:26:42.568 | 30.00th=[ 84], 40.00th=[ 88], 50.00th=[ 92], 60.00th=[ 102], 00:26:42.568 | 70.00th=[ 122], 80.00th=[ 159], 90.00th=[ 209], 95.00th=[ 292], 00:26:42.568 | 99.00th=[ 363], 99.50th=[ 376], 99.90th=[ 397], 99.95th=[ 422], 00:26:42.568 | 99.99th=[ 430] 00:26:42.568 bw ( KiB/s): min=42496, max=219720, per=16.56%, avg=134218.75, stdev=59573.64, samples=20 00:26:42.568 iops : min= 166, max= 858, avg=524.20, stdev=232.67, samples=20 00:26:42.568 lat (msec) : 2=0.02%, 4=0.04%, 10=0.26%, 20=0.38%, 50=1.77% 00:26:42.568 lat (msec) : 100=57.14%, 250=34.27%, 500=6.12% 00:26:42.568 cpu : usr=0.30%, sys=1.75%, ctx=637, majf=0, minf=4097 00:26:42.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:42.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.568 issued rwts: total=5308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.568 job8: (groupid=0, jobs=1): err= 0: pid=3058629: Tue Nov 19 21:16:14 2024 00:26:42.568 read: IOPS=197, BW=49.3MiB/s (51.7MB/s)(501MiB/10176msec) 00:26:42.568 slat (usec): min=9, max=377880, avg=3264.13, stdev=20451.39 00:26:42.568 clat (usec): min=1797, max=984873, avg=321161.51, stdev=214075.68 00:26:42.568 lat (msec): min=2, max=1028, avg=324.43, stdev=217.07 00:26:42.568 clat percentiles (msec): 00:26:42.568 | 1.00th=[ 8], 5.00th=[ 17], 10.00th=[ 31], 20.00th=[ 87], 00:26:42.568 | 30.00th=[ 178], 40.00th=[ 249], 50.00th=[ 292], 60.00th=[ 397], 00:26:42.568 | 70.00th=[ 493], 80.00th=[ 558], 90.00th=[ 600], 95.00th=[ 634], 00:26:42.568 | 99.00th=[ 684], 99.50th=[ 802], 99.90th=[ 802], 99.95th=[ 802], 00:26:42.568 | 99.99th=[ 986] 
00:26:42.568 bw ( KiB/s): min=24576, max=182784, per=6.13%, avg=49686.50, stdev=35920.74, samples=20 00:26:42.568 iops : min= 96, max= 714, avg=193.95, stdev=140.36, samples=20 00:26:42.568 lat (msec) : 2=0.05%, 4=0.10%, 10=1.50%, 20=4.39%, 50=12.12% 00:26:42.568 lat (msec) : 100=4.54%, 250=18.45%, 500=29.23%, 750=29.08%, 1000=0.55% 00:26:42.568 cpu : usr=0.14%, sys=0.71%, ctx=580, majf=0, minf=4097 00:26:42.568 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:42.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.568 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.568 issued rwts: total=2005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.568 job9: (groupid=0, jobs=1): err= 0: pid=3058630: Tue Nov 19 21:16:14 2024 00:26:42.568 read: IOPS=160, BW=40.0MiB/s (41.9MB/s)(406MiB/10156msec) 00:26:42.568 slat (usec): min=8, max=218842, avg=4189.54, stdev=19007.32 00:26:42.568 clat (usec): min=1215, max=972804, avg=395473.18, stdev=185578.05 00:26:42.568 lat (usec): min=1241, max=972838, avg=399662.72, stdev=188267.58 00:26:42.568 clat percentiles (msec): 00:26:42.568 | 1.00th=[ 4], 5.00th=[ 54], 10.00th=[ 148], 20.00th=[ 266], 00:26:42.568 | 30.00th=[ 309], 40.00th=[ 334], 50.00th=[ 359], 60.00th=[ 430], 00:26:42.568 | 70.00th=[ 510], 80.00th=[ 584], 90.00th=[ 625], 95.00th=[ 667], 00:26:42.568 | 99.00th=[ 802], 99.50th=[ 844], 99.90th=[ 969], 99.95th=[ 969], 00:26:42.568 | 99.99th=[ 969] 00:26:42.568 bw ( KiB/s): min=20992, max=68608, per=4.93%, avg=39953.75, stdev=14390.73, samples=20 00:26:42.568 iops : min= 82, max= 268, avg=155.95, stdev=56.34, samples=20 00:26:42.568 lat (msec) : 2=0.80%, 4=0.43%, 10=0.92%, 50=2.28%, 100=3.82% 00:26:42.568 lat (msec) : 250=10.77%, 500=50.46%, 750=28.18%, 1000=2.34% 00:26:42.568 cpu : usr=0.08%, sys=0.46%, ctx=315, majf=0, minf=4097 00:26:42.568 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:26:42.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.568 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.568 issued rwts: total=1625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.569 job10: (groupid=0, jobs=1): err= 0: pid=3058632: Tue Nov 19 21:16:14 2024 00:26:42.569 read: IOPS=180, BW=45.0MiB/s (47.2MB/s)(458MiB/10156msec) 00:26:42.569 slat (usec): min=9, max=616972, avg=3992.07, stdev=23422.77 00:26:42.569 clat (msec): min=12, max=1095, avg=350.86, stdev=261.63 00:26:42.569 lat (msec): min=12, max=1330, avg=354.85, stdev=264.83 00:26:42.569 clat percentiles (msec): 00:26:42.569 | 1.00th=[ 23], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 56], 00:26:42.569 | 30.00th=[ 68], 40.00th=[ 271], 50.00th=[ 397], 60.00th=[ 477], 00:26:42.569 | 70.00th=[ 550], 80.00th=[ 592], 90.00th=[ 642], 95.00th=[ 684], 00:26:42.569 | 99.00th=[ 1099], 99.50th=[ 1099], 99.90th=[ 1099], 99.95th=[ 1099], 00:26:42.569 | 99.99th=[ 1099] 00:26:42.569 bw ( KiB/s): min=13312, max=288768, per=5.58%, avg=45200.60, stdev=58035.95, samples=20 00:26:42.569 iops : min= 52, max= 1128, avg=176.45, stdev=226.74, samples=20 00:26:42.569 lat (msec) : 20=0.82%, 50=13.55%, 100=22.08%, 250=2.46%, 500=23.61% 00:26:42.569 lat (msec) : 750=34.81%, 1000=0.66%, 2000=2.02% 00:26:42.569 cpu : usr=0.08%, sys=0.60%, ctx=262, majf=0, minf=3721 00:26:42.569 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 
8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:26:42.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.569 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.569 issued rwts: total=1830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.569 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.569 00:26:42.569 Run status group 0 (all jobs): 00:26:42.569 READ: bw=792MiB/s (830MB/s), 40.0MiB/s-218MiB/s (41.9MB/s-229MB/s), io=8060MiB (8452MB), run=10029-10183msec 00:26:42.569 00:26:42.569 Disk stats (read/write): 00:26:42.569 nvme0n1: ios=5699/0, merge=0/0, ticks=1234077/0, in_queue=1234077, util=97.05% 00:26:42.569 nvme10n1: ios=3317/0, merge=0/0, ticks=1262382/0, in_queue=1262382, util=97.37% 00:26:42.569 nvme1n1: ios=17163/0, merge=0/0, ticks=1242364/0, in_queue=1242364, util=97.56% 00:26:42.569 nvme2n1: ios=4419/0, merge=0/0, ticks=1258681/0, in_queue=1258681, util=97.79% 00:26:42.569 nvme3n1: ios=4167/0, merge=0/0, ticks=1260588/0, in_queue=1260588, util=97.86% 00:26:42.569 nvme4n1: ios=3319/0, merge=0/0, ticks=1207441/0, in_queue=1207441, util=98.18% 00:26:42.569 nvme5n1: ios=4011/0, merge=0/0, ticks=1259594/0, in_queue=1259594, util=98.38% 00:26:42.569 nvme6n1: ios=10432/0, merge=0/0, ticks=1234871/0, in_queue=1234871, util=98.45% 00:26:42.569 nvme7n1: ios=3997/0, merge=0/0, ticks=1273329/0, in_queue=1273329, util=98.91% 00:26:42.569 nvme8n1: ios=3123/0, merge=0/0, ticks=1224172/0, in_queue=1224172, util=99.10% 00:26:42.569 nvme9n1: ios=3527/0, merge=0/0, ticks=1222906/0, in_queue=1222906, util=99.24% 00:26:42.569 21:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:42.569 [global] 00:26:42.569 thread=1 00:26:42.569 invalidate=1 00:26:42.569 rw=randwrite 00:26:42.569 time_based=1 00:26:42.569 runtime=10 00:26:42.569 ioengine=libaio 00:26:42.569 direct=1 00:26:42.569 bs=262144 00:26:42.569 iodepth=64 00:26:42.569 norandommap=1 00:26:42.569 numjobs=1 00:26:42.569 00:26:42.569 [job0] 00:26:42.569 filename=/dev/nvme0n1 00:26:42.569 [job1] 00:26:42.569 filename=/dev/nvme10n1 00:26:42.569 [job2] 00:26:42.569 filename=/dev/nvme1n1 00:26:42.569 [job3] 00:26:42.569 filename=/dev/nvme2n1 00:26:42.569 [job4] 00:26:42.569 filename=/dev/nvme3n1 00:26:42.569 [job5] 00:26:42.569 filename=/dev/nvme4n1 00:26:42.569 [job6] 00:26:42.569 filename=/dev/nvme5n1 00:26:42.569 [job7] 00:26:42.569 filename=/dev/nvme6n1 00:26:42.569 [job8] 00:26:42.569 filename=/dev/nvme7n1 00:26:42.569 [job9] 00:26:42.569 filename=/dev/nvme8n1 00:26:42.569 [job10] 00:26:42.569 filename=/dev/nvme9n1 00:26:42.569 Could not set queue depth (nvme0n1) 00:26:42.569 Could not set queue depth (nvme10n1) 00:26:42.569 Could not set queue depth (nvme1n1) 00:26:42.569 Could not set queue depth (nvme2n1) 00:26:42.569 Could not set queue depth (nvme3n1) 00:26:42.569 Could not set queue depth (nvme4n1) 00:26:42.569 Could not set queue depth (nvme5n1) 00:26:42.569 Could not set queue depth (nvme6n1) 00:26:42.569 Could not set queue depth (nvme7n1) 00:26:42.569 Could not set queue depth (nvme8n1) 00:26:42.569 Could not set queue depth (nvme9n1) 00:26:42.569 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.569 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.569 job2: (g=0): 
rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.569 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.569 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.569 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.569 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.569 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.569 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.569 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.569 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:42.569 fio-3.35 00:26:42.569 Starting 11 threads 00:26:52.545 00:26:52.545 job0: (groupid=0, jobs=1): err= 0: pid=3059497: Tue Nov 19 21:16:25 2024 00:26:52.545 write: IOPS=236, BW=59.0MiB/s (61.9MB/s)(610MiB/10330msec); 0 zone resets 00:26:52.545 slat (usec): min=17, max=90222, avg=3231.75, stdev=8461.63 00:26:52.545 clat (usec): min=1653, max=839794, avg=267762.26, stdev=147327.86 00:26:52.545 lat (msec): min=2, max=839, avg=270.99, stdev=148.92 00:26:52.545 clat percentiles (msec): 00:26:52.545 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 28], 20.00th=[ 159], 00:26:52.545 | 30.00th=[ 203], 40.00th=[ 222], 50.00th=[ 253], 60.00th=[ 305], 00:26:52.545 | 70.00th=[ 334], 80.00th=[ 372], 90.00th=[ 489], 95.00th=[ 531], 00:26:52.545 | 99.00th=[ 642], 99.50th=[ 743], 99.90th=[ 810], 99.95th=[ 844], 00:26:52.545 | 99.99th=[ 844] 00:26:52.545 bw ( KiB/s): min=30658, max=167936, per=7.88%, avg=60754.50, stdev=31669.23, samples=20 00:26:52.545 iops : min= 119, max= 656, avg=237.20, stdev=123.77, samples=20 00:26:52.545 lat (msec) : 2=0.04%, 4=0.12%, 10=3.12%, 20=3.90%, 50=3.53% 00:26:52.545 lat (msec) : 100=1.76%, 250=37.12%, 500=41.39%, 750=8.61%, 1000=0.41% 00:26:52.545 cpu : usr=0.64%, sys=0.68%, ctx=1134, majf=0, minf=1 00:26:52.545 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:52.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.545 issued rwts: total=0,2438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.545 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.545 job1: (groupid=0, jobs=1): err= 0: pid=3059509: Tue Nov 19 21:16:25 2024 00:26:52.545 write: IOPS=222, BW=55.7MiB/s (58.4MB/s)(576MiB/10330msec); 0 zone resets 00:26:52.545 slat (usec): min=19, max=155736, avg=2781.97, stdev=9336.36 00:26:52.545 clat (msec): min=2, max=837, avg=283.98, stdev=165.43 00:26:52.545 lat (msec): min=3, max=837, avg=286.76, stdev=167.40 00:26:52.545 clat percentiles (msec): 00:26:52.545 | 1.00th=[ 20], 5.00th=[ 45], 10.00th=[ 72], 20.00th=[ 125], 00:26:52.545 | 30.00th=[ 180], 40.00th=[ 226], 50.00th=[ 268], 60.00th=[ 313], 00:26:52.545 | 70.00th=[ 359], 80.00th=[ 447], 90.00th=[ 514], 95.00th=[ 584], 00:26:52.545 | 99.00th=[ 651], 99.50th=[ 735], 99.90th=[ 802], 99.95th=[ 835], 00:26:52.545 | 99.99th=[ 835] 00:26:52.545 bw ( KiB/s): 
min=22528, max=125952, per=7.43%, avg=57306.15, stdev=26574.21, samples=20 00:26:52.545 iops : min= 88, max= 492, avg=223.70, stdev=103.85, samples=20 00:26:52.545 lat (msec) : 4=0.09%, 10=0.17%, 20=1.26%, 50=4.69%, 100=9.64% 00:26:52.545 lat (msec) : 250=31.44%, 500=39.56%, 750=12.72%, 1000=0.43% 00:26:52.545 cpu : usr=0.69%, sys=0.81%, ctx=1340, majf=0, minf=1 00:26:52.545 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:52.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.545 issued rwts: total=0,2303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.545 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.545 job2: (groupid=0, jobs=1): err= 0: pid=3059510: Tue Nov 19 21:16:25 2024 00:26:52.545 write: IOPS=335, BW=83.9MiB/s (88.0MB/s)(851MiB/10143msec); 0 zone resets 00:26:52.545 slat (usec): min=17, max=147401, avg=2189.31, stdev=6720.82 00:26:52.545 clat (usec): min=978, max=597278, avg=188296.03, stdev=130968.66 00:26:52.545 lat (usec): min=1004, max=600407, avg=190485.33, stdev=132452.87 00:26:52.545 clat percentiles (msec): 00:26:52.545 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 28], 20.00th=[ 68], 00:26:52.545 | 30.00th=[ 74], 40.00th=[ 125], 50.00th=[ 188], 60.00th=[ 224], 00:26:52.545 | 70.00th=[ 243], 80.00th=[ 317], 90.00th=[ 376], 95.00th=[ 405], 00:26:52.545 | 99.00th=[ 510], 99.50th=[ 550], 99.90th=[ 584], 99.95th=[ 592], 00:26:52.545 | 99.99th=[ 600] 00:26:52.545 bw ( KiB/s): min=41472, max=241664, per=11.09%, avg=85526.55, stdev=43861.80, samples=20 00:26:52.545 iops : min= 162, max= 944, avg=334.00, stdev=171.36, samples=20 00:26:52.545 lat (usec) : 1000=0.06% 00:26:52.545 lat (msec) : 2=0.23%, 4=0.62%, 10=2.58%, 20=4.43%, 50=6.58% 00:26:52.545 lat (msec) : 100=23.26%, 250=33.36%, 500=27.69%, 750=1.17% 00:26:52.545 cpu : usr=0.99%, sys=1.01%, ctx=1785, majf=0, minf=1 00:26:52.545 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:26:52.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.545 issued rwts: total=0,3405,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.545 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.545 job3: (groupid=0, jobs=1): err= 0: pid=3059511: Tue Nov 19 21:16:25 2024 00:26:52.545 write: IOPS=231, BW=57.9MiB/s (60.8MB/s)(598MiB/10320msec); 0 zone resets 00:26:52.545 slat (usec): min=15, max=60075, avg=3378.40, stdev=8664.84 00:26:52.545 clat (usec): min=1316, max=829642, avg=272538.48, stdev=178832.06 00:26:52.545 lat (usec): min=1357, max=862887, avg=275916.88, stdev=181100.45 00:26:52.545 clat percentiles (msec): 00:26:52.545 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 58], 20.00th=[ 111], 00:26:52.545 | 30.00th=[ 131], 40.00th=[ 180], 50.00th=[ 239], 60.00th=[ 338], 00:26:52.545 | 70.00th=[ 384], 80.00th=[ 447], 90.00th=[ 535], 95.00th=[ 558], 00:26:52.545 | 99.00th=[ 634], 99.50th=[ 768], 99.90th=[ 827], 99.95th=[ 827], 00:26:52.545 | 99.99th=[ 827] 00:26:52.545 bw ( KiB/s): min=28672, max=138240, per=7.73%, avg=59609.65, stdev=31429.85, samples=20 00:26:52.545 iops : min= 112, max= 540, avg=232.70, stdev=122.74, samples=20 00:26:52.545 lat (msec) : 2=0.42%, 4=1.42%, 10=2.93%, 20=1.09%, 50=1.25% 00:26:52.545 lat (msec) : 100=10.37%, 250=34.82%, 500=32.44%, 750=14.76%, 1000=0.50% 00:26:52.545 cpu : usr=0.63%, sys=0.69%, ctx=1092, majf=0, 
minf=1 00:26:52.545 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:52.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.545 issued rwts: total=0,2392,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.545 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.545 job4: (groupid=0, jobs=1): err= 0: pid=3059512: Tue Nov 19 21:16:25 2024 00:26:52.545 write: IOPS=283, BW=70.8MiB/s (74.2MB/s)(731MiB/10327msec); 0 zone resets 00:26:52.545 slat (usec): min=16, max=99870, avg=2859.60, stdev=7673.90 00:26:52.545 clat (msec): min=2, max=846, avg=222.94, stdev=170.67 00:26:52.545 lat (msec): min=2, max=846, avg=225.80, stdev=172.83 00:26:52.545 clat percentiles (msec): 00:26:52.545 | 1.00th=[ 6], 5.00th=[ 27], 10.00th=[ 39], 20.00th=[ 65], 00:26:52.545 | 30.00th=[ 73], 40.00th=[ 107], 50.00th=[ 178], 60.00th=[ 271], 00:26:52.545 | 70.00th=[ 347], 80.00th=[ 388], 90.00th=[ 485], 95.00th=[ 518], 00:26:52.545 | 99.00th=[ 617], 99.50th=[ 709], 99.90th=[ 818], 99.95th=[ 844], 00:26:52.545 | 99.99th=[ 844] 00:26:52.545 bw ( KiB/s): min=28614, max=189819, per=9.50%, avg=73269.30, stdev=53466.14, samples=20 00:26:52.545 iops : min= 111, max= 741, avg=286.05, stdev=208.89, samples=20 00:26:52.545 lat (msec) : 4=0.27%, 10=2.29%, 20=1.23%, 50=10.46%, 100=24.72% 00:26:52.545 lat (msec) : 250=19.08%, 500=34.12%, 750=7.49%, 1000=0.34% 00:26:52.545 cpu : usr=0.75%, sys=0.94%, ctx=1358, majf=0, minf=1 00:26:52.545 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:26:52.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.545 issued rwts: total=0,2925,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.545 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.545 job5: (groupid=0, jobs=1): err= 0: pid=3059513: Tue Nov 19 21:16:25 2024 00:26:52.545 write: IOPS=261, BW=65.3MiB/s (68.4MB/s)(674MiB/10325msec); 0 zone resets 00:26:52.545 slat (usec): min=15, max=473849, avg=2482.61, stdev=12851.79 00:26:52.545 clat (usec): min=1901, max=1001.8k, avg=241683.09, stdev=187402.58 00:26:52.545 lat (usec): min=1938, max=1001.9k, avg=244165.69, stdev=189342.83 00:26:52.545 clat percentiles (msec): 00:26:52.545 | 1.00th=[ 5], 5.00th=[ 19], 10.00th=[ 25], 20.00th=[ 75], 00:26:52.545 | 30.00th=[ 127], 40.00th=[ 146], 50.00th=[ 203], 60.00th=[ 257], 00:26:52.545 | 70.00th=[ 326], 80.00th=[ 397], 90.00th=[ 481], 95.00th=[ 600], 00:26:52.545 | 99.00th=[ 869], 99.50th=[ 911], 99.90th=[ 995], 99.95th=[ 1003], 00:26:52.545 | 99.99th=[ 1003] 00:26:52.545 bw ( KiB/s): min=15360, max=148183, per=8.74%, avg=67375.50, stdev=34633.03, samples=20 00:26:52.545 iops : min= 60, max= 578, avg=263.05, stdev=135.26, samples=20 00:26:52.545 lat (msec) : 2=0.04%, 4=0.85%, 10=0.85%, 20=4.23%, 50=10.57% 00:26:52.545 lat (msec) : 100=6.86%, 250=35.98%, 500=32.53%, 750=6.05%, 1000=2.00% 00:26:52.545 lat (msec) : 2000=0.04% 00:26:52.545 cpu : usr=0.71%, sys=0.99%, ctx=1652, majf=0, minf=1 00:26:52.545 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:52.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.545 issued rwts: total=0,2696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.545 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:26:52.545 job6: (groupid=0, jobs=1): err= 0: pid=3059520: Tue Nov 19 21:16:25 2024 00:26:52.545 write: IOPS=290, BW=72.6MiB/s (76.1MB/s)(738MiB/10165msec); 0 zone resets 00:26:52.545 slat (usec): min=21, max=97371, avg=2120.09, stdev=6110.97 00:26:52.545 clat (usec): min=1117, max=633159, avg=218217.54, stdev=138829.60 00:26:52.545 lat (usec): min=1159, max=633207, avg=220337.64, stdev=140183.90 00:26:52.545 clat percentiles (msec): 00:26:52.545 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 21], 20.00th=[ 85], 00:26:52.545 | 30.00th=[ 148], 40.00th=[ 199], 50.00th=[ 228], 60.00th=[ 241], 00:26:52.545 | 70.00th=[ 275], 80.00th=[ 317], 90.00th=[ 393], 95.00th=[ 464], 00:26:52.545 | 99.00th=[ 609], 99.50th=[ 625], 99.90th=[ 634], 99.95th=[ 634], 00:26:52.545 | 99.99th=[ 634] 00:26:52.545 bw ( KiB/s): min=36864, max=147456, per=9.58%, avg=73859.55, stdev=28465.86, samples=20 00:26:52.545 iops : min= 144, max= 576, avg=288.40, stdev=111.21, samples=20 00:26:52.545 lat (msec) : 2=0.41%, 4=1.36%, 10=3.36%, 20=4.64%, 50=7.02% 00:26:52.545 lat (msec) : 100=5.93%, 250=41.76%, 500=31.59%, 750=3.93% 00:26:52.546 cpu : usr=0.82%, sys=0.93%, ctx=1800, majf=0, minf=1 00:26:52.546 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:52.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.546 issued rwts: total=0,2950,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.546 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.546 job7: (groupid=0, jobs=1): err= 0: pid=3059521: Tue Nov 19 21:16:25 2024 00:26:52.546 write: IOPS=353, BW=88.4MiB/s (92.7MB/s)(896MiB/10134msec); 0 zone resets 00:26:52.546 slat (usec): min=21, max=131634, avg=2193.13, stdev=6754.62 00:26:52.546 clat (usec): min=1382, max=620628, avg=178628.91, stdev=127882.72 00:26:52.546 lat (usec): min=1451, max=627344, avg=180822.04, stdev=129422.38 00:26:52.546 clat percentiles (msec): 00:26:52.546 | 1.00th=[ 7], 5.00th=[ 29], 10.00th=[ 56], 20.00th=[ 63], 00:26:52.546 | 30.00th=[ 65], 40.00th=[ 92], 50.00th=[ 157], 60.00th=[ 211], 00:26:52.546 | 70.00th=[ 243], 80.00th=[ 309], 90.00th=[ 372], 95.00th=[ 401], 00:26:52.546 | 99.00th=[ 493], 99.50th=[ 531], 99.90th=[ 609], 99.95th=[ 617], 00:26:52.546 | 99.99th=[ 617] 00:26:52.546 bw ( KiB/s): min=32191, max=233472, per=11.68%, avg=90086.45, stdev=59636.37, samples=20 00:26:52.546 iops : min= 125, max= 912, avg=351.80, stdev=232.93, samples=20 00:26:52.546 lat (msec) : 2=0.08%, 4=0.28%, 10=1.17%, 20=1.84%, 50=6.31% 00:26:52.546 lat (msec) : 100=32.59%, 250=29.83%, 500=27.01%, 750=0.89% 00:26:52.546 cpu : usr=0.97%, sys=1.01%, ctx=1673, majf=0, minf=1 00:26:52.546 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:26:52.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.546 issued rwts: total=0,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.546 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.546 job8: (groupid=0, jobs=1): err= 0: pid=3059522: Tue Nov 19 21:16:25 2024 00:26:52.546 write: IOPS=288, BW=72.1MiB/s (75.6MB/s)(733MiB/10161msec); 0 zone resets 00:26:52.546 slat (usec): min=16, max=108207, avg=2557.44, stdev=7121.79 00:26:52.546 clat (usec): min=1520, max=621521, avg=219273.29, stdev=133787.08 00:26:52.546 lat (usec): min=1916, max=640163, 
avg=221830.72, stdev=135439.55 00:26:52.546 clat percentiles (msec): 00:26:52.546 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 32], 20.00th=[ 94], 00:26:52.546 | 30.00th=[ 144], 40.00th=[ 178], 50.00th=[ 218], 60.00th=[ 241], 00:26:52.546 | 70.00th=[ 284], 80.00th=[ 338], 90.00th=[ 409], 95.00th=[ 443], 00:26:52.546 | 99.00th=[ 558], 99.50th=[ 592], 99.90th=[ 609], 99.95th=[ 609], 00:26:52.546 | 99.99th=[ 625] 00:26:52.546 bw ( KiB/s): min=31680, max=222208, per=9.52%, avg=73374.50, stdev=41409.70, samples=20 00:26:52.546 iops : min= 123, max= 868, avg=286.50, stdev=161.80, samples=20 00:26:52.546 lat (msec) : 2=0.07%, 4=0.51%, 10=4.06%, 20=3.14%, 50=5.63% 00:26:52.546 lat (msec) : 100=7.00%, 250=45.22%, 500=31.95%, 750=2.42% 00:26:52.546 cpu : usr=0.73%, sys=0.86%, ctx=1583, majf=0, minf=1 00:26:52.546 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:26:52.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.546 issued rwts: total=0,2930,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.546 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.546 job9: (groupid=0, jobs=1): err= 0: pid=3059523: Tue Nov 19 21:16:25 2024 00:26:52.546 write: IOPS=245, BW=61.3MiB/s (64.3MB/s)(633MiB/10323msec); 0 zone resets 00:26:52.546 slat (usec): min=15, max=229217, avg=3022.59, stdev=9211.94 00:26:52.546 clat (usec): min=1211, max=846321, avg=257825.89, stdev=172926.92 00:26:52.546 lat (usec): min=1258, max=846377, avg=260848.48, stdev=175266.38 00:26:52.546 clat percentiles (msec): 00:26:52.546 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 31], 20.00th=[ 63], 00:26:52.546 | 30.00th=[ 125], 40.00th=[ 194], 50.00th=[ 271], 60.00th=[ 334], 00:26:52.546 | 70.00th=[ 363], 80.00th=[ 414], 90.00th=[ 493], 95.00th=[ 518], 00:26:52.546 | 99.00th=[ 651], 99.50th=[ 743], 99.90th=[ 818], 99.95th=[ 844], 00:26:52.546 | 99.99th=[ 844] 00:26:52.546 bw ( KiB/s): min=29184, max=175104, per=8.19%, avg=63174.00, stdev=42592.97, samples=20 00:26:52.546 iops : min= 114, max= 684, avg=246.70, stdev=166.43, samples=20 00:26:52.546 lat (msec) : 2=0.32%, 4=0.83%, 10=3.40%, 20=3.24%, 50=8.14% 00:26:52.546 lat (msec) : 100=10.08%, 250=21.10%, 500=43.46%, 750=9.05%, 1000=0.40% 00:26:52.546 cpu : usr=0.67%, sys=0.79%, ctx=1441, majf=0, minf=1 00:26:52.546 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:52.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.546 issued rwts: total=0,2531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.546 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.546 job10: (groupid=0, jobs=1): err= 0: pid=3059525: Tue Nov 19 21:16:25 2024 00:26:52.546 write: IOPS=288, BW=72.0MiB/s (75.5MB/s)(745MiB/10337msec); 0 zone resets 00:26:52.546 slat (usec): min=15, max=135066, avg=2133.74, stdev=8439.05 00:26:52.546 clat (usec): min=1285, max=863463, avg=219794.71, stdev=178804.02 00:26:52.546 lat (usec): min=1362, max=863518, avg=221928.44, stdev=181053.91 00:26:52.546 clat percentiles (msec): 00:26:52.546 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 21], 20.00th=[ 59], 00:26:52.546 | 30.00th=[ 71], 40.00th=[ 112], 50.00th=[ 192], 60.00th=[ 253], 00:26:52.546 | 70.00th=[ 326], 80.00th=[ 372], 90.00th=[ 439], 95.00th=[ 592], 00:26:52.546 | 99.00th=[ 676], 99.50th=[ 726], 99.90th=[ 835], 99.95th=[ 860], 00:26:52.546 | 
99.99th=[ 860] 00:26:52.546 bw ( KiB/s): min=26624, max=235049, per=9.67%, avg=74589.70, stdev=52225.11, samples=20 00:26:52.546 iops : min= 104, max= 918, avg=291.25, stdev=204.05, samples=20 00:26:52.546 lat (msec) : 2=0.20%, 4=0.57%, 10=3.02%, 20=5.71%, 50=8.86% 00:26:52.546 lat (msec) : 100=19.23%, 250=22.09%, 500=32.53%, 750=7.32%, 1000=0.47% 00:26:52.546 cpu : usr=0.83%, sys=1.06%, ctx=1953, majf=0, minf=1 00:26:52.546 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:52.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:52.546 issued rwts: total=0,2979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.546 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:52.546 00:26:52.546 Run status group 0 (all jobs): 00:26:52.546 WRITE: bw=753MiB/s (790MB/s), 55.7MiB/s-88.4MiB/s (58.4MB/s-92.7MB/s), io=7783MiB (8161MB), run=10134-10337msec 00:26:52.546 00:26:52.546 Disk stats (read/write): 00:26:52.546 nvme0n1: ios=53/4802, merge=0/0, ticks=6400/1227839, in_queue=1234239, util=100.00% 00:26:52.546 nvme10n1: ios=47/4532, merge=0/0, ticks=3231/1219295, in_queue=1222526, util=100.00% 00:26:52.546 nvme1n1: ios=43/6563, merge=0/0, ticks=2222/1209472, in_queue=1211694, util=100.00% 00:26:52.546 nvme2n1: ios=35/4718, merge=0/0, ticks=32/1226007, in_queue=1226039, util=97.99% 00:26:52.546 nvme3n1: ios=0/5779, merge=0/0, ticks=0/1224086, in_queue=1224086, util=97.94% 00:26:52.546 nvme4n1: ios=44/5323, merge=0/0, ticks=5399/1156026, in_queue=1161425, util=100.00% 00:26:52.546 nvme5n1: ios=39/5739, merge=0/0, ticks=560/1217797, in_queue=1218357, util=100.00% 00:26:52.546 nvme6n1: ios=48/6860, merge=0/0, ticks=789/1211672, in_queue=1212461, util=100.00% 00:26:52.546 nvme7n1: ios=0/5700, merge=0/0, ticks=0/1211987, in_queue=1211987, util=98.85% 00:26:52.546 nvme8n1: ios=0/4991, merge=0/0, ticks=0/1228600, in_queue=1228600, util=99.00% 00:26:52.546 nvme9n1: ios=0/5867, merge=0/0, ticks=0/1226425, in_queue=1226425, util=99.15% 00:26:52.546 21:16:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:52.546 21:16:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:52.546 21:16:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.546 21:16:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:52.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:52.546 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:52.546 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:52.546 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:52.546 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:52.546 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:52.546 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:52.546 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1235 -- # return 0 00:26:52.546 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:52.546 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.546 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.546 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.546 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.546 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:52.805 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:52.805 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:52.805 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:52.805 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:52.805 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:52.805 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:52.805 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:52.805 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:52.805 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:52.805 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.805 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.805 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.805 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.805 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:53.064 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:53.064 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:53.064 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:53.064 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:53.064 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:53.064 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:53.064 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:53.064 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1235 -- # return 0 00:26:53.064 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:53.064 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.064 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.064 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.064 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.064 21:16:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:53.322 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:53.322 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:53.322 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:53.322 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:53.322 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:53.580 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:53.580 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:53.580 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:53.580 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:53.580 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.580 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.580 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.580 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.580 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:53.839 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:53.839 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:53.839 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:53.839 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:53.839 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:53.839 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:53.839 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:53.839 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1235 -- # return 0 00:26:53.839 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:53.839 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.839 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.839 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.839 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.839 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:54.097 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:54.097 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:54.097 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:54.097 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:54.097 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:54.097 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:54.097 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:54.355 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:54.355 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:54.355 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.355 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:54.355 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.355 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.355 21:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:54.355 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:54.355 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:54.355 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:54.355 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:54.355 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:54.355 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:54.355 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:54.355 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1235 -- # return 0 00:26:54.355 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:54.355 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.355 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:54.355 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.355 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.355 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:54.613 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:54.613 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:54.613 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:54.613 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:54.613 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:54.613 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:54.613 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:54.613 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:54.613 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:54.613 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.613 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:54.613 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.613 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.613 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:54.869 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:54.869 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:54.869 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:54.869 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:54.869 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:54.869 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:54.869 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:54.869 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1235 -- # return 0 00:26:54.869 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:54.869 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.869 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:54.869 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.869 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.869 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:55.127 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:55.127 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:55.127 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:55.127 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:55.127 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:55.127 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:55.127 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:55.127 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:55.127 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:55.127 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.127 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:55.127 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.127 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:55.127 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:55.128 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:55.128 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:55.128 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:55.128 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:55.128 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:55.128 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:55.128 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:55.128 21:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:55.128 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:55.128 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.128 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:55.128 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.128 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:55.386 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:55.386 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:55.386 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:55.386 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:55.386 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:55.386 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:55.386 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:55.386 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:55.386 rmmod nvme_tcp 00:26:55.386 rmmod nvme_fabrics 00:26:55.386 rmmod nvme_keyring 00:26:55.386 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:55.386 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:55.386 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:55.386 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 3054236 ']' 00:26:55.386 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 3054236 00:26:55.386 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 3054236 ']' 00:26:55.386 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 3054236 00:26:55.386 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:55.386 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:55.386 21:16:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3054236 00:26:55.386 21:16:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:55.386 21:16:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:55.386 21:16:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3054236' 00:26:55.386 killing process with pid 3054236 00:26:55.386 21:16:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 3054236 
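The teardown above repeats the same three steps for every subsystem in the run (serials SPDK1 through SPDK11). A minimal bash sketch of that loop, assuming the rpc_cmd and waitforserial_disconnect helpers behave exactly as the trace shows, is:

  # Hedged sketch of the per-subsystem teardown traced above
  # (not a verbatim copy of target/multiconnection.sh).
  NVMF_SUBSYS=11
  for i in $(seq 1 "$NVMF_SUBSYS"); do
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"             # drop the initiator-side controller
      waitforserial_disconnect "SPDK${i}"                            # poll lsblk until serial SPDK${i} is gone
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"  # remove the target-side subsystem
  done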
00:26:55.386 21:16:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 3054236 00:26:58.671 21:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:58.671 21:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:58.671 21:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:58.671 21:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:58.671 21:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:58.671 21:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:58.671 21:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:58.671 21:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:58.671 21:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:58.671 21:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.671 21:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.671 21:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.576 21:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:00.576 00:27:00.576 real 1m5.317s 00:27:00.576 user 3m49.457s 00:27:00.576 sys 0m15.936s 00:27:00.576 21:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:00.576 21:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:00.576 ************************************ 00:27:00.576 END TEST nvmf_multiconnection 00:27:00.576 ************************************ 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:00.576 ************************************ 00:27:00.576 START TEST nvmf_initiator_timeout 00:27:00.576 ************************************ 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:00.576 * Looking for test storage... 
00:27:00.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:27:00.576 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:00.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.577 --rc genhtml_branch_coverage=1 00:27:00.577 --rc genhtml_function_coverage=1 00:27:00.577 --rc genhtml_legend=1 00:27:00.577 --rc geninfo_all_blocks=1 00:27:00.577 --rc geninfo_unexecuted_blocks=1 00:27:00.577 00:27:00.577 ' 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:00.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.577 --rc genhtml_branch_coverage=1 00:27:00.577 --rc genhtml_function_coverage=1 00:27:00.577 --rc genhtml_legend=1 00:27:00.577 --rc geninfo_all_blocks=1 00:27:00.577 --rc geninfo_unexecuted_blocks=1 00:27:00.577 00:27:00.577 ' 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:00.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.577 --rc genhtml_branch_coverage=1 00:27:00.577 --rc genhtml_function_coverage=1 00:27:00.577 --rc genhtml_legend=1 00:27:00.577 --rc geninfo_all_blocks=1 00:27:00.577 --rc geninfo_unexecuted_blocks=1 00:27:00.577 00:27:00.577 ' 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:00.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.577 --rc genhtml_branch_coverage=1 00:27:00.577 --rc genhtml_function_coverage=1 00:27:00.577 --rc genhtml_legend=1 00:27:00.577 --rc geninfo_all_blocks=1 00:27:00.577 --rc geninfo_unexecuted_blocks=1 00:27:00.577 00:27:00.577 ' 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:00.577 21:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:00.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:00.577 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:00.578 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:00.578 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:00.578 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:00.578 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:00.578 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:00.578 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:00.578 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:00.578 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.578 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.578 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.578 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:00.578 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:00.578 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:27:00.578 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:02.480 21:16:36 
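The "[: : integer expression expected" message above comes from common.sh line 33 applying -eq to an empty string. A one-line guard of the kind that avoids that bash error (a sketch with a hypothetical variable name, not the project's actual fix) looks like:

  # Default an empty value before a numeric test so '[' '' -eq 1 ']' cannot occur.
  if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi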
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:02.480 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:02.480 21:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:02.480 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:02.480 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.480 21:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:02.480 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:02.480 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:02.739 21:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:02.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:02.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:27:02.739 00:27:02.739 --- 10.0.0.2 ping statistics --- 00:27:02.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.739 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:02.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:02.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:27:02.739 00:27:02.739 --- 10.0.0.1 ping statistics --- 00:27:02.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.739 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=3063106 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 3063106 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 3063106 ']' 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:02.739 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:02.739 [2024-11-19 21:16:36.511739] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:27:02.739 [2024-11-19 21:16:36.511881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:02.998 [2024-11-19 21:16:36.666489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:03.286 [2024-11-19 21:16:36.814703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:03.286 [2024-11-19 21:16:36.814787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:03.286 [2024-11-19 21:16:36.814813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:03.286 [2024-11-19 21:16:36.814837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:03.286 [2024-11-19 21:16:36.814856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
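The namespace plumbing traced in the entries above follows a fixed recipe: move one port of the NIC pair into a private namespace, address both ends on 10.0.0.0/24, open TCP port 4420 with an SPDK-tagged iptables rule, confirm reachability with ping, then launch nvmf_tgt inside the namespace. A condensed sketch using the device names and flags from this run:

  # Hedged sketch of the test-network setup and target launch traced above.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                             # root namespace reaches the target address
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &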
00:27:03.286 [2024-11-19 21:16:36.817749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.286 [2024-11-19 21:16:36.817812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:03.286 [2024-11-19 21:16:36.817863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.286 [2024-11-19 21:16:36.817871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.876 Malloc0 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.876 Delay0 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.876 [2024-11-19 21:16:37.627215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.876 21:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:03.876 [2024-11-19 21:16:37.656851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.876 21:16:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:04.811 21:16:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:04.811 21:16:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:27:04.811 21:16:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:04.811 21:16:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:04.811 21:16:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:27:06.710 21:16:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:06.710 21:16:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:06.710 21:16:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:06.710 21:16:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:06.710 21:16:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:06.710 21:16:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:27:06.710 21:16:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3063543 00:27:06.710 21:16:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 
1 -t write -r 60 -v 00:27:06.710 21:16:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:06.710 [global] 00:27:06.710 thread=1 00:27:06.710 invalidate=1 00:27:06.710 rw=write 00:27:06.710 time_based=1 00:27:06.710 runtime=60 00:27:06.710 ioengine=libaio 00:27:06.710 direct=1 00:27:06.710 bs=4096 00:27:06.710 iodepth=1 00:27:06.710 norandommap=0 00:27:06.710 numjobs=1 00:27:06.710 00:27:06.710 verify_dump=1 00:27:06.710 verify_backlog=512 00:27:06.710 verify_state_save=0 00:27:06.710 do_verify=1 00:27:06.710 verify=crc32c-intel 00:27:06.710 [job0] 00:27:06.710 filename=/dev/nvme0n1 00:27:06.710 Could not set queue depth (nvme0n1) 00:27:06.969 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:06.969 fio-3.35 00:27:06.969 Starting 1 thread 00:27:10.250 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:10.250 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.250 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.250 true 00:27:10.250 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.250 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:10.250 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.250 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.250 true 00:27:10.250 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.250 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:10.250 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.250 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.250 true 00:27:10.250 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.250 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:10.250 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.250 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:10.250 true 00:27:10.250 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.250 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:12.778 21:16:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:12.778 21:16:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.778 21:16:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
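Seen end to end, the initiator_timeout run above provisions a delay bdev behind the TCP subsystem, starts a 60-second fio write job against it, inflates the Delay0 latencies far past the initiator's I/O timeout, and later restores them so the job can complete cleanly. A condensed sketch of that RPC sequence, using the names and values from the log (the delay arguments are microseconds, so 31000000 is roughly 31 s):

  # Hedged sketch of the initiator_timeout flow traced above; RPC names and
  # values are taken from the log, host NQN/ID come from the trace's variables.
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30     # 30 us baseline latencies
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v runs in the background, then:
  rpc_cmd bdev_delay_update_latency Delay0 avg_read  31000000                # ~31 s, past the timeout
  rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
  rpc_cmd bdev_delay_update_latency Delay0 p99_read  31000000
  rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
  # ... and later dropped back so outstanding I/O can finish
  for metric in avg_read avg_write p99_read p99_write; do
      rpc_cmd bdev_delay_update_latency Delay0 "$metric" 30
  done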
-- common/autotest_common.sh@10 -- # set +x 00:27:12.778 true 00:27:12.778 21:16:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.778 21:16:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:12.778 21:16:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.778 21:16:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:12.778 true 00:27:12.778 21:16:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.778 21:16:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:12.778 21:16:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.778 21:16:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:12.778 true 00:27:12.778 21:16:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.778 21:16:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:12.778 21:16:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.778 21:16:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:12.778 true 00:27:12.778 21:16:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.778 21:16:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:12.778 21:16:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3063543 00:28:08.992 00:28:08.992 job0: (groupid=0, jobs=1): err= 0: pid=3063618: Tue Nov 19 21:17:40 2024 00:28:08.992 read: IOPS=7, BW=30.6KiB/s (31.4kB/s)(1840KiB/60040msec) 00:28:08.992 slat (usec): min=12, max=8792, avg=42.24, stdev=408.99 00:28:08.992 clat (usec): min=425, max=41260k, avg=130142.33, stdev=1921894.11 00:28:08.992 lat (usec): min=446, max=41260k, avg=130184.57, stdev=1921893.00 00:28:08.992 clat percentiles (usec): 00:28:08.992 | 1.00th=[ 668], 5.00th=[ 40633], 10.00th=[ 41157], 00:28:08.992 | 20.00th=[ 41157], 30.00th=[ 41157], 40.00th=[ 41157], 00:28:08.992 | 50.00th=[ 41157], 60.00th=[ 41157], 70.00th=[ 41157], 00:28:08.992 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 41157], 00:28:08.992 | 99.00th=[ 41157], 99.50th=[ 41681], 99.90th=[17112761], 00:28:08.992 | 99.95th=[17112761], 99.99th=[17112761] 00:28:08.992 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60040msec); 0 zone resets 00:28:08.992 slat (nsec): min=7384, max=69154, avg=16797.69, stdev=7603.83 00:28:08.992 clat (usec): min=213, max=573, avg=275.32, stdev=39.56 00:28:08.992 lat (usec): min=224, max=594, avg=292.12, stdev=41.76 00:28:08.992 clat percentiles (usec): 00:28:08.992 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 245], 00:28:08.992 | 30.00th=[ 251], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 281], 00:28:08.992 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 343], 00:28:08.992 | 99.00th=[ 433], 99.50th=[ 461], 99.90th=[ 578], 99.95th=[ 578], 
00:28:08.992 | 99.99th=[ 578] 00:28:08.992 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:28:08.992 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:28:08.992 lat (usec) : 250=14.81%, 500=38.07%, 750=0.31% 00:28:08.992 lat (msec) : 50=46.71%, >=2000=0.10% 00:28:08.992 cpu : usr=0.03%, sys=0.02%, ctx=973, majf=0, minf=1 00:28:08.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:08.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.992 issued rwts: total=460,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:08.992 00:28:08.992 Run status group 0 (all jobs): 00:28:08.992 READ: bw=30.6KiB/s (31.4kB/s), 30.6KiB/s-30.6KiB/s (31.4kB/s-31.4kB/s), io=1840KiB (1884kB), run=60040-60040msec 00:28:08.992 WRITE: bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60040-60040msec 00:28:08.992 00:28:08.992 Disk stats (read/write): 00:28:08.992 nvme0n1: ios=555/512, merge=0/0, ticks=18595/132, in_queue=18727, util=99.84% 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:08.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:08.992 nvmf hotplug test: fio successful as expected 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # 
trap - SIGINT SIGTERM EXIT 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:08.992 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:08.993 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:08.993 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:08.993 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:08.993 rmmod nvme_tcp 00:28:08.993 rmmod nvme_fabrics 00:28:08.993 rmmod nvme_keyring 00:28:08.993 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:08.993 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:08.993 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:08.993 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 3063106 ']' 00:28:08.993 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 3063106 00:28:08.993 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 3063106 ']' 00:28:08.993 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 3063106 00:28:08.993 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:28:08.993 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:08.993 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3063106 00:28:08.993 21:17:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:08.993 21:17:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:08.993 21:17:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3063106' 00:28:08.993 killing process with pid 3063106 00:28:08.993 21:17:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 3063106 00:28:08.993 21:17:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 3063106 00:28:08.993 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:08.993 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:08.993 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:08.993 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:08.993 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:28:08.993 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:08.993 
21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:28:08.993 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:08.993 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:08.993 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.993 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:08.993 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.897 21:17:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:10.897 00:28:10.897 real 1m10.304s 00:28:10.897 user 4m16.844s 00:28:10.897 sys 0m6.531s 00:28:10.897 21:17:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:10.897 21:17:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:10.897 ************************************ 00:28:10.897 END TEST nvmf_initiator_timeout 00:28:10.897 ************************************ 00:28:10.897 21:17:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:10.897 21:17:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:10.897 21:17:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:10.897 21:17:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:10.897 21:17:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:12.799 21:17:46 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.799 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:12.799 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:12.800 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:12.800 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:12.800 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:12.800 ************************************ 00:28:12.800 START TEST nvmf_perf_adq 00:28:12.800 ************************************ 00:28:12.800 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:13.060 * Looking for test storage... 
00:28:13.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:13.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.060 --rc genhtml_branch_coverage=1 00:28:13.060 --rc genhtml_function_coverage=1 00:28:13.060 --rc genhtml_legend=1 00:28:13.060 --rc geninfo_all_blocks=1 00:28:13.060 --rc geninfo_unexecuted_blocks=1 00:28:13.060 00:28:13.060 ' 00:28:13.060 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:13.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.060 --rc genhtml_branch_coverage=1 00:28:13.060 --rc genhtml_function_coverage=1 00:28:13.060 --rc genhtml_legend=1 00:28:13.060 --rc geninfo_all_blocks=1 00:28:13.060 --rc geninfo_unexecuted_blocks=1 00:28:13.060 00:28:13.060 ' 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:13.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.061 --rc genhtml_branch_coverage=1 00:28:13.061 --rc genhtml_function_coverage=1 00:28:13.061 --rc genhtml_legend=1 00:28:13.061 --rc geninfo_all_blocks=1 00:28:13.061 --rc geninfo_unexecuted_blocks=1 00:28:13.061 00:28:13.061 ' 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:13.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.061 --rc genhtml_branch_coverage=1 00:28:13.061 --rc genhtml_function_coverage=1 00:28:13.061 --rc genhtml_legend=1 00:28:13.061 --rc geninfo_all_blocks=1 00:28:13.061 --rc geninfo_unexecuted_blocks=1 00:28:13.061 00:28:13.061 ' 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
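At this point perf_adq.sh is still sourcing nvmf/common.sh; the actual target setup happens through the rpc_cmd calls traced further down. Condensed into one place, and assuming the stock scripts/rpc.py client as the front end for that rpc_cmd wrapper (talking to the /var/tmp/spdk.sock the harness later waits for), the sequence amounts to roughly the following sketch; every argument value is copied from the traced commands rather than chosen here:

    # ADQ prerequisites, as done by adq_reload_driver in perf_adq.sh:
    modprobe -a sch_mqprio
    rmmod ice && modprobe ice && sleep 5

    # Target-side configuration, mirroring the traced rpc_cmd calls:
    rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Load generator, exactly as invoked by the test (cores 4-7 via -c 0xF0):
    spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

While the perf job runs, the test pulls nvmf_get_stats and checks that each of the four poll groups owns exactly one io_qpair, which is the observable effect the placement-id and sock-priority settings are meant to produce once the ice driver has been reloaded with mqprio available.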
00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:13.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:13.061 21:17:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:13.061 21:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:14.960 21:17:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:14.960 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:14.960 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.960 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:14.961 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:14.961 21:17:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:14.961 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:14.961 21:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:15.530 21:17:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:18.124 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:23.398 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:23.398 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:23.398 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:23.398 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:23.398 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:23.398 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:23.398 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.398 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.398 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.398 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:23.398 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:23.398 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:23.398 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:23.399 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:23.399 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:23.399 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:23.399 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:23.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:23.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:28:23.399 00:28:23.399 --- 10.0.0.2 ping statistics --- 00:28:23.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.399 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:23.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:23.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:28:23.399 00:28:23.399 --- 10.0.0.1 ping statistics --- 00:28:23.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.399 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:28:23.399 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3075398 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3075398 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3075398 ']' 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:23.400 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.400 [2024-11-19 21:17:56.823768] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
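The nvmf_tcp_init steps traced just above build the two-port topology these phy runs rely on: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and carries the target address 10.0.0.2, while its peer port (cvl_0_1, presumably cabled back to it on this node) stays in the root namespace with the initiator address 10.0.0.1. Stripped of the harness wrappers, the setup is approximately:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open TCP/4420 on the root-namespace side, as the ipts helper does above:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

nvmf_tgt itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc), and the harness waits for /var/tmp/spdk.sock before issuing any RPCs, which is what the startup output below corresponds to.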
00:28:23.400 [2024-11-19 21:17:56.823910] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:23.400 [2024-11-19 21:17:56.978445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:23.400 [2024-11-19 21:17:57.108374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.400 [2024-11-19 21:17:57.108438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.400 [2024-11-19 21:17:57.108459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:23.400 [2024-11-19 21:17:57.108479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:23.400 [2024-11-19 21:17:57.108494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:23.400 [2024-11-19 21:17:57.110974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.400 [2024-11-19 21:17:57.114103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:23.400 [2024-11-19 21:17:57.114142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.400 [2024-11-19 21:17:57.114143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:24.335 21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:24.335 21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:24.335 21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:24.335 21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:24.335 21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.335 21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.335 21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:24.335 21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:24.335 21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:24.335 21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.335 21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.335 21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.335 21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:24.335 21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:24.335 21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.335 21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.335 21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.335 
21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:24.335 21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.335 21:17:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.593 [2024-11-19 21:17:58.239730] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.593 Malloc1 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.593 [2024-11-19 21:17:58.356851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3075560 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:24.593 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:27.125 21:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:27.125 21:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.125 21:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:27.125 21:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.125 21:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:27.125 "tick_rate": 2700000000, 00:28:27.125 "poll_groups": [ 00:28:27.125 { 00:28:27.125 "name": "nvmf_tgt_poll_group_000", 00:28:27.125 "admin_qpairs": 1, 00:28:27.125 "io_qpairs": 1, 00:28:27.125 "current_admin_qpairs": 1, 00:28:27.125 "current_io_qpairs": 1, 00:28:27.125 "pending_bdev_io": 0, 00:28:27.125 "completed_nvme_io": 16354, 00:28:27.125 "transports": [ 00:28:27.125 { 00:28:27.125 "trtype": "TCP" 00:28:27.125 } 00:28:27.125 ] 00:28:27.125 }, 00:28:27.125 { 00:28:27.125 "name": "nvmf_tgt_poll_group_001", 00:28:27.125 "admin_qpairs": 0, 00:28:27.125 "io_qpairs": 1, 00:28:27.125 "current_admin_qpairs": 0, 00:28:27.125 "current_io_qpairs": 1, 00:28:27.125 "pending_bdev_io": 0, 00:28:27.125 "completed_nvme_io": 15910, 00:28:27.125 "transports": [ 00:28:27.125 { 00:28:27.125 "trtype": "TCP" 00:28:27.125 } 00:28:27.125 ] 00:28:27.125 }, 00:28:27.125 { 00:28:27.125 "name": "nvmf_tgt_poll_group_002", 00:28:27.125 "admin_qpairs": 0, 00:28:27.125 "io_qpairs": 1, 00:28:27.125 "current_admin_qpairs": 0, 00:28:27.125 "current_io_qpairs": 1, 00:28:27.125 "pending_bdev_io": 0, 00:28:27.125 "completed_nvme_io": 16044, 00:28:27.125 "transports": [ 00:28:27.125 { 00:28:27.125 "trtype": "TCP" 00:28:27.125 } 00:28:27.125 ] 00:28:27.125 }, 00:28:27.125 { 00:28:27.125 "name": "nvmf_tgt_poll_group_003", 00:28:27.125 "admin_qpairs": 0, 00:28:27.125 "io_qpairs": 1, 00:28:27.125 "current_admin_qpairs": 0, 00:28:27.125 "current_io_qpairs": 1, 00:28:27.125 "pending_bdev_io": 0, 00:28:27.125 "completed_nvme_io": 16666, 00:28:27.125 "transports": [ 00:28:27.125 { 00:28:27.125 "trtype": "TCP" 00:28:27.125 } 00:28:27.125 ] 00:28:27.125 } 00:28:27.125 ] 00:28:27.125 }' 00:28:27.125 21:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:27.125 21:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:27.125 21:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:27.125 21:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:27.125 21:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3075560 00:28:35.237 Initializing NVMe Controllers 00:28:35.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:35.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:35.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:35.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:35.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
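While spdk_nvme_perf drives random reads from cores 4-7 (-c 0xF0), perf_adq.sh@85-87 above polls nvmf_get_stats and requires that, without ADQ, each of the four target poll groups owns exactly one I/O qpair. A condensed sketch of that check (rpc.py path assumed):
count=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)
[[ $count -ne 4 ]] && { echo "I/O qpairs not spread across all 4 poll groups"; exit 1; }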
lcore 7 00:28:35.237 Initialization complete. Launching workers. 00:28:35.237 ======================================================== 00:28:35.237 Latency(us) 00:28:35.237 Device Information : IOPS MiB/s Average min max 00:28:35.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9031.10 35.28 7086.37 2855.15 10995.26 00:28:35.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8645.10 33.77 7402.43 3307.51 11601.06 00:28:35.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8661.70 33.83 7392.26 3155.23 12063.61 00:28:35.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8842.20 34.54 7240.55 3373.26 11672.60 00:28:35.237 ======================================================== 00:28:35.237 Total : 35180.09 137.42 7278.10 2855.15 12063.61 00:28:35.237 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:35.237 rmmod nvme_tcp 00:28:35.237 rmmod nvme_fabrics 00:28:35.237 rmmod nvme_keyring 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3075398 ']' 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3075398 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3075398 ']' 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3075398 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3075398 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3075398' 00:28:35.237 killing process with pid 3075398 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3075398 00:28:35.237 21:18:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3075398 00:28:36.614 21:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
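The teardown above (nvmftestfini) unloads the initiator-side NVMe modules and stops the target by the pid recorded at start-up; condensed, it amounts to roughly this (module names and the pid are taken from the log):
sync
sudo modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring, as logged
sudo modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"  # nvmfpid is 3075398 in this run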
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:36.614 21:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:36.614 21:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:36.614 21:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:36.614 21:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:36.614 21:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:36.614 21:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:36.614 21:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:36.614 21:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:36.614 21:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.614 21:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:36.614 21:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.520 21:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:38.520 21:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:38.520 21:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:38.520 21:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:39.088 21:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:41.618 21:18:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
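Between the two runs, adq_reload_driver (traced above) makes sure the mqprio scheduler is available and reloads the ice driver so the E810 ports come back without any stale channel configuration; the 5-second sleep gives the links time to settle before the next nvmftestinit probes them:
sudo modprobe -a sch_mqprio
sudo rmmod ice
sudo modprobe ice
sleep 5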
mellanox=0x15b3 pci net_dev 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.892 21:18:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.892 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.892 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.892 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.892 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.892 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.892 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:46.892 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.892 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:46.893 21:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:46.893 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:46.893 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:46.893 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:46.893 21:18:20 
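The discovery loop above walks the harness's PCI cache, keeps the Intel E810 functions (0x8086:0x159b on 0000:0a:00.0 and 0000:0a:00.1 here), and resolves each function to its netdev through sysfs. Something equivalent by hand, using lspci as a stand-in for the cached bus scan:
for pci in $(lspci -D -n -d 8086:159b | awk '{print $1}'); do
    ls "/sys/bus/pci/devices/$pci/net/"    # prints cvl_0_0 and cvl_0_1 in this run
done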
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:46.893 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:46.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:46.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:28:46.893 00:28:46.893 --- 10.0.0.2 ping statistics --- 00:28:46.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.893 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:46.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:28:46.893 00:28:46.893 --- 10.0.0.1 ping statistics --- 00:28:46.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.893 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:46.893 net.core.busy_poll = 1 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
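The namespace plumbing traced in the preceding entries isolates the target port: cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2/24 while cvl_0_1 keeps 10.0.0.1/24 in the root namespace, TCP/4420 is opened with an iptables rule tagged SPDK_NVMF (so the later iptables-save | grep -v SPDK_NVMF cleanup can strip it again), and reachability is confirmed in both directions. Condensed sketch, with the rule comment shortened:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1 && ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1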
00:28:46.893 net.core.busy_read = 1 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:46.893 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:46.894 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:46.894 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:46.894 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:46.894 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3078928 00:28:46.894 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:46.894 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3078928 00:28:46.894 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3078928 ']' 00:28:46.894 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.894 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:46.894 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.894 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:46.894 21:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:46.894 [2024-11-19 21:18:20.447320] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:28:46.894 [2024-11-19 21:18:20.447502] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.894 [2024-11-19 21:18:20.595000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:47.152 [2024-11-19 21:18:20.738402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
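adq_configure_driver (traced above) is where ADQ gets wired up on the NIC: hardware TC offload is enabled on the E810 port, busy polling is turned on, the port is split into two traffic classes of two queues each, and a hardware flower filter (skip_sw) steers every TCP flow to 10.0.0.2:4420 into TC 1. Condensed, with the per-interface commands run inside the target namespace as in the log:
NS="ip netns exec cvl_0_0_ns_spdk"
$NS ethtool --offload cvl_0_0 hw-tc-offload on
$NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1 net.core.busy_read=1
$NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev cvl_0_0 ingress
$NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
$NS ./scripts/perf/nvmf/set_xps_rxqs cvl_0_0    # pin XPS so transmit queues follow the same mapping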
00:28:47.152 [2024-11-19 21:18:20.738488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:47.152 [2024-11-19 21:18:20.738515] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:47.152 [2024-11-19 21:18:20.738539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:47.152 [2024-11-19 21:18:20.738559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:47.152 [2024-11-19 21:18:20.741439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.152 [2024-11-19 21:18:20.741513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:47.152 [2024-11-19 21:18:20.741609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.152 [2024-11-19 21:18:20.741615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:47.718 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:47.718 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:47.718 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:47.718 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:47.718 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:47.718 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:47.718 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:47.718 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:47.718 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:47.718 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.718 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:47.718 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.718 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:47.718 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:47.718 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.718 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:47.718 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.718 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:47.719 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.719 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.286 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.286 21:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:48.286 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.286 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.286 [2024-11-19 21:18:21.795508] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.286 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.286 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:48.286 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.286 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.286 Malloc1 00:28:48.286 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.286 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:48.286 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.287 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.287 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.287 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:48.287 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.287 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.287 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.287 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:48.287 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.287 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.287 [2024-11-19 21:18:21.914154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:48.287 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.287 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3079206 00:28:48.287 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:48.287 21:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:50.189 21:18:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:50.189 21:18:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.189 21:18:23 
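The second target is configured exactly like the baseline one, with the two ADQ-relevant switches flipped (traced across the preceding entries); in rpc.py terms the only differences are:
scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
Placement-id 1 lets the TCP transport group each connection onto the poll group that polls the receive queue it arrived on, and sock priority 1 tags the target's sockets so their traffic maps onto the TC 1 channel configured earlier (interpretation, not shown explicitly in the trace).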
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:50.189 21:18:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.189 21:18:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:50.189 "tick_rate": 2700000000, 00:28:50.189 "poll_groups": [ 00:28:50.189 { 00:28:50.189 "name": "nvmf_tgt_poll_group_000", 00:28:50.189 "admin_qpairs": 1, 00:28:50.189 "io_qpairs": 2, 00:28:50.189 "current_admin_qpairs": 1, 00:28:50.189 "current_io_qpairs": 2, 00:28:50.189 "pending_bdev_io": 0, 00:28:50.189 "completed_nvme_io": 19439, 00:28:50.189 "transports": [ 00:28:50.189 { 00:28:50.189 "trtype": "TCP" 00:28:50.189 } 00:28:50.189 ] 00:28:50.189 }, 00:28:50.189 { 00:28:50.189 "name": "nvmf_tgt_poll_group_001", 00:28:50.189 "admin_qpairs": 0, 00:28:50.189 "io_qpairs": 2, 00:28:50.189 "current_admin_qpairs": 0, 00:28:50.189 "current_io_qpairs": 2, 00:28:50.189 "pending_bdev_io": 0, 00:28:50.189 "completed_nvme_io": 19357, 00:28:50.189 "transports": [ 00:28:50.189 { 00:28:50.189 "trtype": "TCP" 00:28:50.189 } 00:28:50.189 ] 00:28:50.189 }, 00:28:50.189 { 00:28:50.189 "name": "nvmf_tgt_poll_group_002", 00:28:50.189 "admin_qpairs": 0, 00:28:50.189 "io_qpairs": 0, 00:28:50.189 "current_admin_qpairs": 0, 00:28:50.189 "current_io_qpairs": 0, 00:28:50.189 "pending_bdev_io": 0, 00:28:50.189 "completed_nvme_io": 0, 00:28:50.189 "transports": [ 00:28:50.189 { 00:28:50.189 "trtype": "TCP" 00:28:50.189 } 00:28:50.189 ] 00:28:50.189 }, 00:28:50.189 { 00:28:50.189 "name": "nvmf_tgt_poll_group_003", 00:28:50.189 "admin_qpairs": 0, 00:28:50.189 "io_qpairs": 0, 00:28:50.189 "current_admin_qpairs": 0, 00:28:50.189 "current_io_qpairs": 0, 00:28:50.189 "pending_bdev_io": 0, 00:28:50.189 "completed_nvme_io": 0, 00:28:50.189 "transports": [ 00:28:50.189 { 00:28:50.189 "trtype": "TCP" 00:28:50.189 } 00:28:50.189 ] 00:28:50.189 } 00:28:50.189 ] 00:28:50.189 }' 00:28:50.189 21:18:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:50.189 21:18:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:50.447 21:18:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:50.447 21:18:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:50.447 21:18:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3079206 00:28:58.566 Initializing NVMe Controllers 00:28:58.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:58.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:58.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:58.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:58.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:58.566 Initialization complete. Launching workers. 
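The ADQ-mode check above (perf_adq.sh@107-109) is the mirror image of the baseline one: with the flower filter pinning all port-4420 flows onto the two queues of TC 1, only two of the four poll groups carry I/O qpairs (two each in the stats above), so the test requires at least two poll groups to be completely idle. Condensed sketch:
idle=$(scripts/rpc.py nvmf_get_stats \
       | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
       | wc -l)
[[ $idle -lt 2 ]] && { echo "ADQ steering did not concentrate I/O on 2 poll groups"; exit 1; }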
00:28:58.566 ======================================================== 00:28:58.566 Latency(us) 00:28:58.566 Device Information : IOPS MiB/s Average min max 00:28:58.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5157.00 20.14 12422.78 2384.67 58792.30 00:28:58.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4670.40 18.24 13708.63 2578.26 57423.55 00:28:58.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5379.80 21.01 11898.34 2483.81 57787.64 00:28:58.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5833.80 22.79 10975.05 2380.19 56745.69 00:28:58.566 ======================================================== 00:28:58.566 Total : 21040.99 82.19 12172.71 2380.19 58792.30 00:28:58.566 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:58.566 rmmod nvme_tcp 00:28:58.566 rmmod nvme_fabrics 00:28:58.566 rmmod nvme_keyring 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3078928 ']' 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3078928 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3078928 ']' 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3078928 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3078928 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3078928' 00:28:58.566 killing process with pid 3078928 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3078928 00:28:58.566 21:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3078928 00:28:59.943 21:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:59.943 
21:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:59.943 21:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:59.943 21:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:59.943 21:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:59.943 21:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:59.943 21:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:59.943 21:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:59.943 21:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:59.943 21:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.943 21:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.943 21:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.847 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:01.848 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:29:01.848 00:29:01.848 real 0m49.013s 00:29:01.848 user 2m54.313s 00:29:01.848 sys 0m9.571s 00:29:01.848 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:01.848 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.848 ************************************ 00:29:01.848 END TEST nvmf_perf_adq 00:29:01.848 ************************************ 00:29:01.848 21:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:01.848 21:18:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:01.848 21:18:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:01.848 21:18:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:01.848 ************************************ 00:29:01.848 START TEST nvmf_shutdown 00:29:01.848 ************************************ 00:29:01.848 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:02.108 * Looking for test storage... 
00:29:02.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:02.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.108 --rc genhtml_branch_coverage=1 00:29:02.108 --rc genhtml_function_coverage=1 00:29:02.108 --rc genhtml_legend=1 00:29:02.108 --rc geninfo_all_blocks=1 00:29:02.108 --rc geninfo_unexecuted_blocks=1 00:29:02.108 00:29:02.108 ' 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:02.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.108 --rc genhtml_branch_coverage=1 00:29:02.108 --rc genhtml_function_coverage=1 00:29:02.108 --rc genhtml_legend=1 00:29:02.108 --rc geninfo_all_blocks=1 00:29:02.108 --rc geninfo_unexecuted_blocks=1 00:29:02.108 00:29:02.108 ' 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:02.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.108 --rc genhtml_branch_coverage=1 00:29:02.108 --rc genhtml_function_coverage=1 00:29:02.108 --rc genhtml_legend=1 00:29:02.108 --rc geninfo_all_blocks=1 00:29:02.108 --rc geninfo_unexecuted_blocks=1 00:29:02.108 00:29:02.108 ' 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:02.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.108 --rc genhtml_branch_coverage=1 00:29:02.108 --rc genhtml_function_coverage=1 00:29:02.108 --rc genhtml_legend=1 00:29:02.108 --rc geninfo_all_blocks=1 00:29:02.108 --rc geninfo_unexecuted_blocks=1 00:29:02.108 00:29:02.108 ' 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
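The probe above only decides which lcov flags to export: lt 1.15 2 asks whether the installed lcov (1.15) predates 2.x, and cmp_versions answers by splitting both strings on dots and dashes and comparing field by field. A compact stand-in with the same outcome, using sort -V rather than the script's own loop:
lt() { [ "$1" = "$2" ] && return 1; [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'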
00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.108 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
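The host identity set-up traced above derives a host NQN from nvme-cli and keeps the bare UUID as the host ID for later nvme connect calls; condensed:
NVME_HOSTNQN=$(nvme gen-hostnqn)         # nqn.2014-08.org.nvmexpress:uuid:<uuid>, as logged
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}      # just the <uuid> part
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")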
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:02.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:02.109 21:18:35 
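The "[: : integer expression expected" line above is harmless but noisy: nvmf/common.sh line 33 feeds an unset variable straight into an integer test ('[' '' -eq 1 ']'). A defensive form would default the value first; the variable name below is a stand-in, since it is not visible in the trace:
if [ "${SPDK_TEST_SOMETHING:-0}" -eq 1 ]; then
    :   # whatever common.sh appends to NVMF_APP in this branch (not shown in the trace)
fi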
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:02.109 ************************************ 00:29:02.109 START TEST nvmf_shutdown_tc1 00:29:02.109 ************************************ 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:02.109 21:18:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:04.641 21:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:04.641 21:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:04.641 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:04.641 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.641 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:04.642 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:04.642 21:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:04.642 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:04.642 21:18:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:04.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:04.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:29:04.642 00:29:04.642 --- 10.0.0.2 ping statistics --- 00:29:04.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.642 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:04.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:29:04.642 00:29:04.642 --- 10.0.0.1 ping statistics --- 00:29:04.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.642 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3082500 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3082500 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3082500 ']' 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:04.642 21:18:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:04.642 [2024-11-19 21:18:38.155221] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:29:04.642 [2024-11-19 21:18:38.155397] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.642 [2024-11-19 21:18:38.302980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:04.899 [2024-11-19 21:18:38.441256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.899 [2024-11-19 21:18:38.441329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:04.899 [2024-11-19 21:18:38.441355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:04.899 [2024-11-19 21:18:38.441380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:04.899 [2024-11-19 21:18:38.441399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:04.899 [2024-11-19 21:18:38.444147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:04.899 [2024-11-19 21:18:38.444251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:04.899 [2024-11-19 21:18:38.444297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.899 [2024-11-19 21:18:38.444303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:05.465 [2024-11-19 21:18:39.131315] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:05.465 21:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.465 21:18:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:05.465 Malloc1 
00:29:05.723 [2024-11-19 21:18:39.277346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.723 Malloc2 00:29:05.723 Malloc3 00:29:05.982 Malloc4 00:29:05.982 Malloc5 00:29:05.982 Malloc6 00:29:06.240 Malloc7 00:29:06.240 Malloc8 00:29:06.499 Malloc9 00:29:06.499 Malloc10 00:29:06.499 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.499 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:06.499 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:06.499 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:06.499 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3082816 00:29:06.499 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3082816 /var/tmp/bdevperf.sock 00:29:06.499 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3082816 ']' 00:29:06.499 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:06.499 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:06.499 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:06.499 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.499 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:06.499 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:06.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.500 { 00:29:06.500 "params": { 00:29:06.500 "name": "Nvme$subsystem", 00:29:06.500 "trtype": "$TEST_TRANSPORT", 00:29:06.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.500 "adrfam": "ipv4", 00:29:06.500 "trsvcid": "$NVMF_PORT", 00:29:06.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.500 "hdgst": ${hdgst:-false}, 00:29:06.500 "ddgst": ${ddgst:-false} 00:29:06.500 }, 00:29:06.500 "method": "bdev_nvme_attach_controller" 00:29:06.500 } 00:29:06.500 EOF 00:29:06.500 )") 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.500 { 00:29:06.500 "params": { 00:29:06.500 "name": "Nvme$subsystem", 00:29:06.500 "trtype": "$TEST_TRANSPORT", 00:29:06.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.500 "adrfam": "ipv4", 00:29:06.500 "trsvcid": "$NVMF_PORT", 00:29:06.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.500 "hdgst": ${hdgst:-false}, 00:29:06.500 "ddgst": ${ddgst:-false} 00:29:06.500 }, 00:29:06.500 "method": "bdev_nvme_attach_controller" 00:29:06.500 } 00:29:06.500 EOF 00:29:06.500 )") 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.500 { 00:29:06.500 "params": { 00:29:06.500 "name": "Nvme$subsystem", 00:29:06.500 "trtype": "$TEST_TRANSPORT", 00:29:06.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.500 "adrfam": "ipv4", 00:29:06.500 "trsvcid": "$NVMF_PORT", 00:29:06.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.500 "hdgst": ${hdgst:-false}, 00:29:06.500 "ddgst": ${ddgst:-false} 00:29:06.500 }, 00:29:06.500 "method": "bdev_nvme_attach_controller" 00:29:06.500 } 00:29:06.500 EOF 00:29:06.500 )") 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.500 { 00:29:06.500 "params": { 00:29:06.500 "name": "Nvme$subsystem", 00:29:06.500 
"trtype": "$TEST_TRANSPORT", 00:29:06.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.500 "adrfam": "ipv4", 00:29:06.500 "trsvcid": "$NVMF_PORT", 00:29:06.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.500 "hdgst": ${hdgst:-false}, 00:29:06.500 "ddgst": ${ddgst:-false} 00:29:06.500 }, 00:29:06.500 "method": "bdev_nvme_attach_controller" 00:29:06.500 } 00:29:06.500 EOF 00:29:06.500 )") 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.500 { 00:29:06.500 "params": { 00:29:06.500 "name": "Nvme$subsystem", 00:29:06.500 "trtype": "$TEST_TRANSPORT", 00:29:06.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.500 "adrfam": "ipv4", 00:29:06.500 "trsvcid": "$NVMF_PORT", 00:29:06.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.500 "hdgst": ${hdgst:-false}, 00:29:06.500 "ddgst": ${ddgst:-false} 00:29:06.500 }, 00:29:06.500 "method": "bdev_nvme_attach_controller" 00:29:06.500 } 00:29:06.500 EOF 00:29:06.500 )") 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.500 { 00:29:06.500 "params": { 00:29:06.500 "name": "Nvme$subsystem", 00:29:06.500 "trtype": "$TEST_TRANSPORT", 00:29:06.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.500 "adrfam": "ipv4", 00:29:06.500 "trsvcid": "$NVMF_PORT", 00:29:06.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.500 "hdgst": ${hdgst:-false}, 00:29:06.500 "ddgst": ${ddgst:-false} 00:29:06.500 }, 00:29:06.500 "method": "bdev_nvme_attach_controller" 00:29:06.500 } 00:29:06.500 EOF 00:29:06.500 )") 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.500 { 00:29:06.500 "params": { 00:29:06.500 "name": "Nvme$subsystem", 00:29:06.500 "trtype": "$TEST_TRANSPORT", 00:29:06.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.500 "adrfam": "ipv4", 00:29:06.500 "trsvcid": "$NVMF_PORT", 00:29:06.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.500 "hdgst": ${hdgst:-false}, 00:29:06.500 "ddgst": ${ddgst:-false} 00:29:06.500 }, 00:29:06.500 "method": "bdev_nvme_attach_controller" 00:29:06.500 } 00:29:06.500 EOF 00:29:06.500 )") 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.500 21:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.500 { 00:29:06.500 "params": { 00:29:06.500 "name": "Nvme$subsystem", 00:29:06.500 "trtype": "$TEST_TRANSPORT", 00:29:06.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.500 "adrfam": "ipv4", 00:29:06.500 "trsvcid": "$NVMF_PORT", 00:29:06.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.500 "hdgst": ${hdgst:-false}, 00:29:06.500 "ddgst": ${ddgst:-false} 00:29:06.500 }, 00:29:06.500 "method": "bdev_nvme_attach_controller" 00:29:06.500 } 00:29:06.500 EOF 00:29:06.500 )") 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.500 { 00:29:06.500 "params": { 00:29:06.500 "name": "Nvme$subsystem", 00:29:06.500 "trtype": "$TEST_TRANSPORT", 00:29:06.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.500 "adrfam": "ipv4", 00:29:06.500 "trsvcid": "$NVMF_PORT", 00:29:06.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.500 "hdgst": ${hdgst:-false}, 00:29:06.500 "ddgst": ${ddgst:-false} 00:29:06.500 }, 00:29:06.500 "method": "bdev_nvme_attach_controller" 00:29:06.500 } 00:29:06.500 EOF 00:29:06.500 )") 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.500 { 00:29:06.500 "params": { 00:29:06.500 "name": "Nvme$subsystem", 00:29:06.500 "trtype": "$TEST_TRANSPORT", 00:29:06.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.500 "adrfam": "ipv4", 00:29:06.500 "trsvcid": "$NVMF_PORT", 00:29:06.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.500 "hdgst": ${hdgst:-false}, 00:29:06.500 "ddgst": ${ddgst:-false} 00:29:06.500 }, 00:29:06.500 "method": "bdev_nvme_attach_controller" 00:29:06.500 } 00:29:06.500 EOF 00:29:06.500 )") 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:06.500 21:18:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:06.500 "params": { 00:29:06.500 "name": "Nvme1", 00:29:06.500 "trtype": "tcp", 00:29:06.501 "traddr": "10.0.0.2", 00:29:06.501 "adrfam": "ipv4", 00:29:06.501 "trsvcid": "4420", 00:29:06.501 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:06.501 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:06.501 "hdgst": false, 00:29:06.501 "ddgst": false 00:29:06.501 }, 00:29:06.501 "method": "bdev_nvme_attach_controller" 00:29:06.501 },{ 00:29:06.501 "params": { 00:29:06.501 "name": "Nvme2", 00:29:06.501 "trtype": "tcp", 00:29:06.501 "traddr": "10.0.0.2", 00:29:06.501 "adrfam": "ipv4", 00:29:06.501 "trsvcid": "4420", 00:29:06.501 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:06.501 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:06.501 "hdgst": false, 00:29:06.501 "ddgst": false 00:29:06.501 }, 00:29:06.501 "method": "bdev_nvme_attach_controller" 00:29:06.501 },{ 00:29:06.501 "params": { 00:29:06.501 "name": "Nvme3", 00:29:06.501 "trtype": "tcp", 00:29:06.501 "traddr": "10.0.0.2", 00:29:06.501 "adrfam": "ipv4", 00:29:06.501 "trsvcid": "4420", 00:29:06.501 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:06.501 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:06.501 "hdgst": false, 00:29:06.501 "ddgst": false 00:29:06.501 }, 00:29:06.501 "method": "bdev_nvme_attach_controller" 00:29:06.501 },{ 00:29:06.501 "params": { 00:29:06.501 "name": "Nvme4", 00:29:06.501 "trtype": "tcp", 00:29:06.501 "traddr": "10.0.0.2", 00:29:06.501 "adrfam": "ipv4", 00:29:06.501 "trsvcid": "4420", 00:29:06.501 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:06.501 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:06.501 "hdgst": false, 00:29:06.501 "ddgst": false 00:29:06.501 }, 00:29:06.501 "method": "bdev_nvme_attach_controller" 00:29:06.501 },{ 00:29:06.501 "params": { 00:29:06.501 "name": "Nvme5", 00:29:06.501 "trtype": "tcp", 00:29:06.501 "traddr": "10.0.0.2", 00:29:06.501 "adrfam": "ipv4", 00:29:06.501 "trsvcid": "4420", 00:29:06.501 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:06.501 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:06.501 "hdgst": false, 00:29:06.501 "ddgst": false 00:29:06.501 }, 00:29:06.501 "method": "bdev_nvme_attach_controller" 00:29:06.501 },{ 00:29:06.501 "params": { 00:29:06.501 "name": "Nvme6", 00:29:06.501 "trtype": "tcp", 00:29:06.501 "traddr": "10.0.0.2", 00:29:06.501 "adrfam": "ipv4", 00:29:06.501 "trsvcid": "4420", 00:29:06.501 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:06.501 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:06.501 "hdgst": false, 00:29:06.501 "ddgst": false 00:29:06.501 }, 00:29:06.501 "method": "bdev_nvme_attach_controller" 00:29:06.501 },{ 00:29:06.501 "params": { 00:29:06.501 "name": "Nvme7", 00:29:06.501 "trtype": "tcp", 00:29:06.501 "traddr": "10.0.0.2", 00:29:06.501 "adrfam": "ipv4", 00:29:06.501 "trsvcid": "4420", 00:29:06.501 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:06.501 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:06.501 "hdgst": false, 00:29:06.501 "ddgst": false 00:29:06.501 }, 00:29:06.501 "method": "bdev_nvme_attach_controller" 00:29:06.501 },{ 00:29:06.501 "params": { 00:29:06.501 "name": "Nvme8", 00:29:06.501 "trtype": "tcp", 00:29:06.501 "traddr": "10.0.0.2", 00:29:06.501 "adrfam": "ipv4", 00:29:06.501 "trsvcid": "4420", 00:29:06.501 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:06.501 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:06.501 "hdgst": false, 00:29:06.501 "ddgst": false 00:29:06.501 }, 00:29:06.501 "method": "bdev_nvme_attach_controller" 00:29:06.501 },{ 00:29:06.501 "params": { 00:29:06.501 "name": "Nvme9", 00:29:06.501 "trtype": "tcp", 00:29:06.501 "traddr": "10.0.0.2", 00:29:06.501 "adrfam": "ipv4", 00:29:06.501 "trsvcid": "4420", 00:29:06.501 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:06.501 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:06.501 "hdgst": false, 00:29:06.501 "ddgst": false 00:29:06.501 }, 00:29:06.501 "method": "bdev_nvme_attach_controller" 00:29:06.501 },{ 00:29:06.501 "params": { 00:29:06.501 "name": "Nvme10", 00:29:06.501 "trtype": "tcp", 00:29:06.501 "traddr": "10.0.0.2", 00:29:06.501 "adrfam": "ipv4", 00:29:06.501 "trsvcid": "4420", 00:29:06.501 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:06.501 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:06.501 "hdgst": false, 00:29:06.501 "ddgst": false 00:29:06.501 }, 00:29:06.501 "method": "bdev_nvme_attach_controller" 00:29:06.501 }' 00:29:06.759 [2024-11-19 21:18:40.315184] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:29:06.759 [2024-11-19 21:18:40.315341] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:06.759 [2024-11-19 21:18:40.491451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.017 [2024-11-19 21:18:40.622243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.544 21:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:09.544 21:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:09.544 21:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:09.544 21:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.544 21:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:09.544 21:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.544 21:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3082816 00:29:09.544 21:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:09.544 21:18:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:10.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3082816 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3082500 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:10.478 { 00:29:10.478 "params": { 00:29:10.478 "name": "Nvme$subsystem", 00:29:10.478 "trtype": "$TEST_TRANSPORT", 00:29:10.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.478 "adrfam": "ipv4", 00:29:10.478 "trsvcid": "$NVMF_PORT", 00:29:10.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.478 "hdgst": ${hdgst:-false}, 00:29:10.478 "ddgst": ${ddgst:-false} 00:29:10.478 }, 00:29:10.478 "method": "bdev_nvme_attach_controller" 00:29:10.478 } 00:29:10.478 EOF 00:29:10.478 )") 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:10.478 { 00:29:10.478 "params": { 00:29:10.478 "name": "Nvme$subsystem", 00:29:10.478 "trtype": "$TEST_TRANSPORT", 00:29:10.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.478 "adrfam": "ipv4", 00:29:10.478 "trsvcid": "$NVMF_PORT", 00:29:10.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.478 "hdgst": ${hdgst:-false}, 00:29:10.478 "ddgst": ${ddgst:-false} 00:29:10.478 }, 00:29:10.478 "method": "bdev_nvme_attach_controller" 00:29:10.478 } 00:29:10.478 EOF 00:29:10.478 )") 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:10.478 { 00:29:10.478 "params": { 00:29:10.478 "name": "Nvme$subsystem", 00:29:10.478 "trtype": "$TEST_TRANSPORT", 00:29:10.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.478 "adrfam": "ipv4", 00:29:10.478 "trsvcid": "$NVMF_PORT", 00:29:10.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.478 "hdgst": ${hdgst:-false}, 00:29:10.478 "ddgst": ${ddgst:-false} 00:29:10.478 }, 00:29:10.478 "method": "bdev_nvme_attach_controller" 00:29:10.478 } 00:29:10.478 EOF 00:29:10.478 )") 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:10.478 { 00:29:10.478 "params": { 00:29:10.478 "name": "Nvme$subsystem", 00:29:10.478 "trtype": "$TEST_TRANSPORT", 00:29:10.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.478 "adrfam": "ipv4", 00:29:10.478 
"trsvcid": "$NVMF_PORT", 00:29:10.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.478 "hdgst": ${hdgst:-false}, 00:29:10.478 "ddgst": ${ddgst:-false} 00:29:10.478 }, 00:29:10.478 "method": "bdev_nvme_attach_controller" 00:29:10.478 } 00:29:10.478 EOF 00:29:10.478 )") 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:10.478 { 00:29:10.478 "params": { 00:29:10.478 "name": "Nvme$subsystem", 00:29:10.478 "trtype": "$TEST_TRANSPORT", 00:29:10.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.478 "adrfam": "ipv4", 00:29:10.478 "trsvcid": "$NVMF_PORT", 00:29:10.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.478 "hdgst": ${hdgst:-false}, 00:29:10.478 "ddgst": ${ddgst:-false} 00:29:10.478 }, 00:29:10.478 "method": "bdev_nvme_attach_controller" 00:29:10.478 } 00:29:10.478 EOF 00:29:10.478 )") 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:10.478 { 00:29:10.478 "params": { 00:29:10.478 "name": "Nvme$subsystem", 00:29:10.478 "trtype": "$TEST_TRANSPORT", 00:29:10.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.478 "adrfam": "ipv4", 00:29:10.478 "trsvcid": "$NVMF_PORT", 00:29:10.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.478 "hdgst": ${hdgst:-false}, 00:29:10.478 "ddgst": ${ddgst:-false} 00:29:10.478 }, 00:29:10.478 "method": "bdev_nvme_attach_controller" 00:29:10.478 } 00:29:10.478 EOF 00:29:10.478 )") 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:10.478 { 00:29:10.478 "params": { 00:29:10.478 "name": "Nvme$subsystem", 00:29:10.478 "trtype": "$TEST_TRANSPORT", 00:29:10.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.478 "adrfam": "ipv4", 00:29:10.478 "trsvcid": "$NVMF_PORT", 00:29:10.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.478 "hdgst": ${hdgst:-false}, 00:29:10.478 "ddgst": ${ddgst:-false} 00:29:10.478 }, 00:29:10.478 "method": "bdev_nvme_attach_controller" 00:29:10.478 } 00:29:10.478 EOF 00:29:10.478 )") 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:10.478 { 00:29:10.478 
"params": { 00:29:10.478 "name": "Nvme$subsystem", 00:29:10.478 "trtype": "$TEST_TRANSPORT", 00:29:10.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.478 "adrfam": "ipv4", 00:29:10.478 "trsvcid": "$NVMF_PORT", 00:29:10.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.478 "hdgst": ${hdgst:-false}, 00:29:10.478 "ddgst": ${ddgst:-false} 00:29:10.478 }, 00:29:10.478 "method": "bdev_nvme_attach_controller" 00:29:10.478 } 00:29:10.478 EOF 00:29:10.478 )") 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:10.478 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:10.478 { 00:29:10.478 "params": { 00:29:10.478 "name": "Nvme$subsystem", 00:29:10.478 "trtype": "$TEST_TRANSPORT", 00:29:10.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.478 "adrfam": "ipv4", 00:29:10.478 "trsvcid": "$NVMF_PORT", 00:29:10.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.478 "hdgst": ${hdgst:-false}, 00:29:10.478 "ddgst": ${ddgst:-false} 00:29:10.478 }, 00:29:10.478 "method": "bdev_nvme_attach_controller" 00:29:10.478 } 00:29:10.478 EOF 00:29:10.478 )") 00:29:10.479 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:10.479 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:10.479 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:10.479 { 00:29:10.479 "params": { 00:29:10.479 "name": "Nvme$subsystem", 00:29:10.479 "trtype": "$TEST_TRANSPORT", 00:29:10.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.479 "adrfam": "ipv4", 00:29:10.479 "trsvcid": "$NVMF_PORT", 00:29:10.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.479 "hdgst": ${hdgst:-false}, 00:29:10.479 "ddgst": ${ddgst:-false} 00:29:10.479 }, 00:29:10.479 "method": "bdev_nvme_attach_controller" 00:29:10.479 } 00:29:10.479 EOF 00:29:10.479 )") 00:29:10.479 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:10.479 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:29:10.479 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:10.479 21:18:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:10.479 "params": { 00:29:10.479 "name": "Nvme1", 00:29:10.479 "trtype": "tcp", 00:29:10.479 "traddr": "10.0.0.2", 00:29:10.479 "adrfam": "ipv4", 00:29:10.479 "trsvcid": "4420", 00:29:10.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:10.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:10.479 "hdgst": false, 00:29:10.479 "ddgst": false 00:29:10.479 }, 00:29:10.479 "method": "bdev_nvme_attach_controller" 00:29:10.479 },{ 00:29:10.479 "params": { 00:29:10.479 "name": "Nvme2", 00:29:10.479 "trtype": "tcp", 00:29:10.479 "traddr": "10.0.0.2", 00:29:10.479 "adrfam": "ipv4", 00:29:10.479 "trsvcid": "4420", 00:29:10.479 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:10.479 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:10.479 "hdgst": false, 00:29:10.479 "ddgst": false 00:29:10.479 }, 00:29:10.479 "method": "bdev_nvme_attach_controller" 00:29:10.479 },{ 00:29:10.479 "params": { 00:29:10.479 "name": "Nvme3", 00:29:10.479 "trtype": "tcp", 00:29:10.479 "traddr": "10.0.0.2", 00:29:10.479 "adrfam": "ipv4", 00:29:10.479 "trsvcid": "4420", 00:29:10.479 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:10.479 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:10.479 "hdgst": false, 00:29:10.479 "ddgst": false 00:29:10.479 }, 00:29:10.479 "method": "bdev_nvme_attach_controller" 00:29:10.479 },{ 00:29:10.479 "params": { 00:29:10.479 "name": "Nvme4", 00:29:10.479 "trtype": "tcp", 00:29:10.479 "traddr": "10.0.0.2", 00:29:10.479 "adrfam": "ipv4", 00:29:10.479 "trsvcid": "4420", 00:29:10.479 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:10.479 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:10.479 "hdgst": false, 00:29:10.479 "ddgst": false 00:29:10.479 }, 00:29:10.479 "method": "bdev_nvme_attach_controller" 00:29:10.479 },{ 00:29:10.479 "params": { 00:29:10.479 "name": "Nvme5", 00:29:10.479 "trtype": "tcp", 00:29:10.479 "traddr": "10.0.0.2", 00:29:10.479 "adrfam": "ipv4", 00:29:10.479 "trsvcid": "4420", 00:29:10.479 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:10.479 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:10.479 "hdgst": false, 00:29:10.479 "ddgst": false 00:29:10.479 }, 00:29:10.479 "method": "bdev_nvme_attach_controller" 00:29:10.479 },{ 00:29:10.479 "params": { 00:29:10.479 "name": "Nvme6", 00:29:10.479 "trtype": "tcp", 00:29:10.479 "traddr": "10.0.0.2", 00:29:10.479 "adrfam": "ipv4", 00:29:10.479 "trsvcid": "4420", 00:29:10.479 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:10.479 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:10.479 "hdgst": false, 00:29:10.479 "ddgst": false 00:29:10.479 }, 00:29:10.479 "method": "bdev_nvme_attach_controller" 00:29:10.479 },{ 00:29:10.479 "params": { 00:29:10.479 "name": "Nvme7", 00:29:10.479 "trtype": "tcp", 00:29:10.479 "traddr": "10.0.0.2", 00:29:10.479 "adrfam": "ipv4", 00:29:10.479 "trsvcid": "4420", 00:29:10.479 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:10.479 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:10.479 "hdgst": false, 00:29:10.479 "ddgst": false 00:29:10.479 }, 00:29:10.479 "method": "bdev_nvme_attach_controller" 00:29:10.479 },{ 00:29:10.479 "params": { 00:29:10.479 "name": "Nvme8", 00:29:10.479 "trtype": "tcp", 00:29:10.479 "traddr": "10.0.0.2", 00:29:10.479 "adrfam": "ipv4", 00:29:10.479 "trsvcid": "4420", 00:29:10.479 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:10.479 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:10.479 "hdgst": false, 00:29:10.479 "ddgst": false 00:29:10.479 }, 00:29:10.479 "method": "bdev_nvme_attach_controller" 00:29:10.479 },{ 00:29:10.479 "params": { 00:29:10.479 "name": "Nvme9", 00:29:10.479 "trtype": "tcp", 00:29:10.479 "traddr": "10.0.0.2", 00:29:10.479 "adrfam": "ipv4", 00:29:10.479 "trsvcid": "4420", 00:29:10.479 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:10.479 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:10.479 "hdgst": false, 00:29:10.479 "ddgst": false 00:29:10.479 }, 00:29:10.479 "method": "bdev_nvme_attach_controller" 00:29:10.479 },{ 00:29:10.479 "params": { 00:29:10.479 "name": "Nvme10", 00:29:10.479 "trtype": "tcp", 00:29:10.479 "traddr": "10.0.0.2", 00:29:10.479 "adrfam": "ipv4", 00:29:10.479 "trsvcid": "4420", 00:29:10.479 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:10.479 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:10.479 "hdgst": false, 00:29:10.479 "ddgst": false 00:29:10.479 }, 00:29:10.479 "method": "bdev_nvme_attach_controller" 00:29:10.479 }' 00:29:10.479 [2024-11-19 21:18:44.155332] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:29:10.479 [2024-11-19 21:18:44.155517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3083243 ] 00:29:10.737 [2024-11-19 21:18:44.298671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.737 [2024-11-19 21:18:44.427836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.638 Running I/O for 1 seconds... 00:29:13.831 1472.00 IOPS, 92.00 MiB/s 00:29:13.831 Latency(us) 00:29:13.831 [2024-11-19T20:18:47.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.831 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.831 Verification LBA range: start 0x0 length 0x400 00:29:13.831 Nvme1n1 : 1.04 184.80 11.55 0.00 0.00 342085.78 22330.79 315349.52 00:29:13.831 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.831 Verification LBA range: start 0x0 length 0x400 00:29:13.831 Nvme2n1 : 1.22 210.26 13.14 0.00 0.00 296383.15 22330.79 301368.51 00:29:13.831 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.831 Verification LBA range: start 0x0 length 0x400 00:29:13.831 Nvme3n1 : 1.19 214.30 13.39 0.00 0.00 285188.74 21845.33 295154.73 00:29:13.831 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.831 Verification LBA range: start 0x0 length 0x400 00:29:13.831 Nvme4n1 : 1.20 213.16 13.32 0.00 0.00 282322.11 32816.55 288940.94 00:29:13.831 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.831 Verification LBA range: start 0x0 length 0x400 00:29:13.831 Nvme5n1 : 1.12 171.53 10.72 0.00 0.00 342636.22 23204.60 299815.06 00:29:13.831 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.831 Verification LBA range: start 0x0 length 0x400 00:29:13.831 Nvme6n1 : 1.13 173.78 10.86 0.00 0.00 330310.68 6359.42 301368.51 00:29:13.831 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.831 Verification LBA range: start 0x0 length 0x400 00:29:13.831 Nvme7n1 : 1.23 208.41 13.03 0.00 0.00 274136.94 22427.88 299815.06 00:29:13.831 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.831 Verification 
LBA range: start 0x0 length 0x400 00:29:13.831 Nvme8n1 : 1.22 209.60 13.10 0.00 0.00 267603.25 22719.15 298261.62 00:29:13.831 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.831 Verification LBA range: start 0x0 length 0x400 00:29:13.831 Nvme9n1 : 1.24 207.02 12.94 0.00 0.00 266613.19 24078.41 306028.85 00:29:13.831 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:13.831 Verification LBA range: start 0x0 length 0x400 00:29:13.831 Nvme10n1 : 1.24 206.12 12.88 0.00 0.00 262632.11 20388.98 335544.32 00:29:13.831 [2024-11-19T20:18:47.626Z] =================================================================================================================== 00:29:13.831 [2024-11-19T20:18:47.626Z] Total : 1998.97 124.94 0.00 0.00 291557.93 6359.42 335544.32 00:29:14.765 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:14.765 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:14.765 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:14.765 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:14.765 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:14.765 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:14.765 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:14.765 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:14.765 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:14.765 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:14.765 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:14.765 rmmod nvme_tcp 00:29:14.765 rmmod nvme_fabrics 00:29:15.024 rmmod nvme_keyring 00:29:15.024 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:15.024 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:15.024 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:15.024 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3082500 ']' 00:29:15.024 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3082500 00:29:15.024 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3082500 ']' 00:29:15.024 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3082500 00:29:15.024 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:29:15.024 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:15.024 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3082500 00:29:15.024 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:15.024 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:15.024 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3082500' 00:29:15.024 killing process with pid 3082500 00:29:15.025 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3082500 00:29:15.025 21:18:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3082500 00:29:17.556 21:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:17.556 21:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:17.556 21:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:17.556 21:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:17.556 21:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:29:17.556 21:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:17.556 21:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:29:17.815 21:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:17.815 21:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:17.815 21:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.815 21:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.815 21:18:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:19.714 00:29:19.714 real 0m17.581s 00:29:19.714 user 0m57.473s 00:29:19.714 sys 0m3.842s 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:19.714 ************************************ 00:29:19.714 END TEST nvmf_shutdown_tc1 00:29:19.714 ************************************ 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
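That completes tc1: the bdevperf results above are followed by stoptarget/nvmftestfini, which removes the per-test state files, unloads the NVMe/TCP kernel modules (the rmmod lines), kills the nvmf_tgt started for the test, and tears the namespace plumbing back down. Condensed into plain commands, as a sketch only (paths are shortened to $rootdir, the spdk checkout, and remove_spdk_ns runs with xtrace disabled, so its body below is an assumption):

# Sketch of the tc1 teardown traced above; not the verbatim helpers.
rm -f ./local-job0-0-verify.state
rm -rf "$rootdir/test/nvmf/target/bdevperf.conf" "$rootdir/test/nvmf/target/rpcs.txt"

sync
modprobe -v -r nvme-tcp              # emits the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines
modprobe -v -r nvme-fabrics

kill "$nvmfpid" && wait "$nvmfpid"   # nvmf_tgt for tc1 (pid 3082500 in this run)

iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the tagged port-4420 rule
ip netns delete cvl_0_0_ns_spdk                        # assumed body of remove_spdk_ns
ip -4 addr flush cvl_0_1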
00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:19.714 ************************************ 00:29:19.714 START TEST nvmf_shutdown_tc2 00:29:19.714 ************************************ 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:19.714 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:19.715 21:18:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:19.715 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:19.715 21:18:53 
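nvmftestinit for tc2 starts by building whitelists of supported NIC PCI IDs (Intel E810 and X722 parts plus a list of Mellanox IDs) and then walking the PCI bus; on this host it finds the two E810 ports at 0000:0a:00.0 and 0000:0a:00.1 (8086:159b) and maps each to its kernel netdev through sysfs. A condensed sketch of that discovery, reduced to the one device ID that actually matched here (the real gather_supported_nvmf_pci_devs covers many more IDs and the RDMA cases):

# Sysfs-based sketch of the NIC discovery traced above; an assumption, not the
# verbatim helper.
net_devs=()
for pci in /sys/bus/pci/devices/*; do
  [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
  echo "Found ${pci##*/} (0x8086 - 0x159b)"
  for net_dev in "$pci"/net/*; do
    [[ -e $net_dev ]] || continue        # port may be bound to a userspace driver
    echo "Found net devices under ${pci##*/}: ${net_dev##*/}"
    net_devs+=("${net_dev##*/}")
  done
done
# On this machine the result is net_devs=(cvl_0_0 cvl_0_1).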
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:19.715 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:19.715 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:19.715 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:19.715 21:18:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:19.715 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:19.974 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:19.974 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:19.974 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:19.974 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:19.974 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:19.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:29:19.975 00:29:19.975 --- 10.0.0.2 ping statistics --- 00:29:19.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.975 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:19.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:19.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:29:19.975 00:29:19.975 --- 10.0.0.1 ping statistics --- 00:29:19.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.975 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3084521 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3084521 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3084521 ']' 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
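Both pings succeed, confirming the layout nvmf_tcp_init set up above: the first E810 port (cvl_0_0, 10.0.0.2) lives in the cvl_0_0_ns_spdk namespace and carries the target, the second port (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator, and an iptables rule admits NVMe/TCP traffic on port 4420. The whole sequence, condensed from the trace (workspace paths shortened):

# Condensed from the nvmf_tcp_init / nvmfappstart trace above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, test ns

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Tag the rule so the teardown's "grep -v SPDK_NVMF" can strip it again.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                                   # root ns -> target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator address

# Start the target inside the namespace and wait for /var/tmp/spdk.sock.
ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
waitforlisten "$nvmfpid"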
00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.975 21:18:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.975 [2024-11-19 21:18:53.742627] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:29:19.975 [2024-11-19 21:18:53.742784] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.233 [2024-11-19 21:18:53.896496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:20.490 [2024-11-19 21:18:54.039817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.490 [2024-11-19 21:18:54.039892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.490 [2024-11-19 21:18:54.039919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:20.490 [2024-11-19 21:18:54.039943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:20.490 [2024-11-19 21:18:54.039962] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:20.490 [2024-11-19 21:18:54.042845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:20.490 [2024-11-19 21:18:54.042959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:20.490 [2024-11-19 21:18:54.043003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.490 [2024-11-19 21:18:54.043009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.057 [2024-11-19 21:18:54.719256] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:21.057 21:18:54 
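With the TCP transport created, the loop that follows writes one block of RPCs per subsystem into rpcs.txt (the repeated cat lines) and then replays the whole file through a single rpc_cmd call, which is what produces the Malloc1-Malloc10 bdevs and the 10.0.0.2:4420 listener notice below. The file's contents are never echoed in the trace; as an assumption, one per-subsystem block would look roughly like this (these are standard SPDK RPC names, but the exact arguments used by target/shutdown.sh are not shown):

# Assumed shape of one rpcs.txt block for subsystem $i (i = 1..10); not taken
# verbatim from the trace.
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420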
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.057 21:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.057 Malloc1 
00:29:21.318 [2024-11-19 21:18:54.865868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:21.318 Malloc2 00:29:21.318 Malloc3 00:29:21.610 Malloc4 00:29:21.610 Malloc5 00:29:21.610 Malloc6 00:29:21.916 Malloc7 00:29:21.916 Malloc8 00:29:21.916 Malloc9 00:29:22.197 Malloc10 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3084763 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3084763 /var/tmp/bdevperf.sock 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3084763 ']' 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:22.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
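bdevperf is then launched against its own RPC socket, with the JSON emitted by gen_nvmf_target_json handed over as a process substitution (that is the /dev/fd/63 in the command line above). In outline:

# Outline of the tc2 bdevperf launch traced above (workspace path shortened).
"$rootdir/build/examples/bdevperf" \
  -r /var/tmp/bdevperf.sock \
  --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
  -q 64 -o 65536 -w verify -t 10 &
perfpid=$!                                   # 3084763 in this run
waitforlisten "$perfpid" /var/tmp/bdevperf.sock
rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init

The -q/-o/-w/-t flags match the "depth: 64, IO size: 65536" headers and the "Running I/O for 10 seconds..." line that appear in the output further down.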
00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.197 { 00:29:22.197 "params": { 00:29:22.197 "name": "Nvme$subsystem", 00:29:22.197 "trtype": "$TEST_TRANSPORT", 00:29:22.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.197 "adrfam": "ipv4", 00:29:22.197 "trsvcid": "$NVMF_PORT", 00:29:22.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.197 "hdgst": ${hdgst:-false}, 00:29:22.197 "ddgst": ${ddgst:-false} 00:29:22.197 }, 00:29:22.197 "method": "bdev_nvme_attach_controller" 00:29:22.197 } 00:29:22.197 EOF 00:29:22.197 )") 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.197 { 00:29:22.197 "params": { 00:29:22.197 "name": "Nvme$subsystem", 00:29:22.197 "trtype": "$TEST_TRANSPORT", 00:29:22.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.197 "adrfam": "ipv4", 00:29:22.197 "trsvcid": "$NVMF_PORT", 00:29:22.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.197 "hdgst": ${hdgst:-false}, 00:29:22.197 "ddgst": ${ddgst:-false} 00:29:22.197 }, 00:29:22.197 "method": "bdev_nvme_attach_controller" 00:29:22.197 } 00:29:22.197 EOF 00:29:22.197 )") 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.197 { 00:29:22.197 "params": { 00:29:22.197 "name": "Nvme$subsystem", 00:29:22.197 "trtype": "$TEST_TRANSPORT", 00:29:22.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.197 "adrfam": "ipv4", 00:29:22.197 "trsvcid": "$NVMF_PORT", 00:29:22.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.197 "hdgst": ${hdgst:-false}, 00:29:22.197 "ddgst": ${ddgst:-false} 00:29:22.197 }, 00:29:22.197 "method": "bdev_nvme_attach_controller" 00:29:22.197 } 00:29:22.197 EOF 00:29:22.197 )") 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:29:22.197 { 00:29:22.197 "params": { 00:29:22.197 "name": "Nvme$subsystem", 00:29:22.197 "trtype": "$TEST_TRANSPORT", 00:29:22.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.197 "adrfam": "ipv4", 00:29:22.197 "trsvcid": "$NVMF_PORT", 00:29:22.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.197 "hdgst": ${hdgst:-false}, 00:29:22.197 "ddgst": ${ddgst:-false} 00:29:22.197 }, 00:29:22.197 "method": "bdev_nvme_attach_controller" 00:29:22.197 } 00:29:22.197 EOF 00:29:22.197 )") 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.197 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.197 { 00:29:22.197 "params": { 00:29:22.197 "name": "Nvme$subsystem", 00:29:22.197 "trtype": "$TEST_TRANSPORT", 00:29:22.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.197 "adrfam": "ipv4", 00:29:22.197 "trsvcid": "$NVMF_PORT", 00:29:22.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.197 "hdgst": ${hdgst:-false}, 00:29:22.197 "ddgst": ${ddgst:-false} 00:29:22.197 }, 00:29:22.197 "method": "bdev_nvme_attach_controller" 00:29:22.197 } 00:29:22.197 EOF 00:29:22.197 )") 00:29:22.198 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:22.198 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.198 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.198 { 00:29:22.198 "params": { 00:29:22.198 "name": "Nvme$subsystem", 00:29:22.198 "trtype": "$TEST_TRANSPORT", 00:29:22.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.198 "adrfam": "ipv4", 00:29:22.198 "trsvcid": "$NVMF_PORT", 00:29:22.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.198 "hdgst": ${hdgst:-false}, 00:29:22.198 "ddgst": ${ddgst:-false} 00:29:22.198 }, 00:29:22.198 "method": "bdev_nvme_attach_controller" 00:29:22.198 } 00:29:22.198 EOF 00:29:22.198 )") 00:29:22.198 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:22.198 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.198 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.198 { 00:29:22.198 "params": { 00:29:22.198 "name": "Nvme$subsystem", 00:29:22.198 "trtype": "$TEST_TRANSPORT", 00:29:22.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.198 "adrfam": "ipv4", 00:29:22.198 "trsvcid": "$NVMF_PORT", 00:29:22.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.198 "hdgst": ${hdgst:-false}, 00:29:22.198 "ddgst": ${ddgst:-false} 00:29:22.198 }, 00:29:22.198 "method": "bdev_nvme_attach_controller" 00:29:22.198 } 00:29:22.198 EOF 00:29:22.198 )") 00:29:22.198 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:22.198 21:18:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.198 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.198 { 00:29:22.198 "params": { 00:29:22.198 "name": "Nvme$subsystem", 00:29:22.198 "trtype": "$TEST_TRANSPORT", 00:29:22.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.198 "adrfam": "ipv4", 00:29:22.198 "trsvcid": "$NVMF_PORT", 00:29:22.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.198 "hdgst": ${hdgst:-false}, 00:29:22.198 "ddgst": ${ddgst:-false} 00:29:22.198 }, 00:29:22.198 "method": "bdev_nvme_attach_controller" 00:29:22.198 } 00:29:22.198 EOF 00:29:22.198 )") 00:29:22.198 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:22.198 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.198 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.198 { 00:29:22.198 "params": { 00:29:22.198 "name": "Nvme$subsystem", 00:29:22.198 "trtype": "$TEST_TRANSPORT", 00:29:22.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.198 "adrfam": "ipv4", 00:29:22.198 "trsvcid": "$NVMF_PORT", 00:29:22.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.198 "hdgst": ${hdgst:-false}, 00:29:22.198 "ddgst": ${ddgst:-false} 00:29:22.198 }, 00:29:22.198 "method": "bdev_nvme_attach_controller" 00:29:22.198 } 00:29:22.198 EOF 00:29:22.198 )") 00:29:22.198 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:22.198 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.198 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.198 { 00:29:22.198 "params": { 00:29:22.198 "name": "Nvme$subsystem", 00:29:22.198 "trtype": "$TEST_TRANSPORT", 00:29:22.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.198 "adrfam": "ipv4", 00:29:22.198 "trsvcid": "$NVMF_PORT", 00:29:22.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.198 "hdgst": ${hdgst:-false}, 00:29:22.198 "ddgst": ${ddgst:-false} 00:29:22.198 }, 00:29:22.198 "method": "bdev_nvme_attach_controller" 00:29:22.198 } 00:29:22.198 EOF 00:29:22.198 )") 00:29:22.198 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:22.198 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
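The same gen_nvmf_target_json pattern already traced for tc1 repeats here for the tc2 bdevperf config: one bdev_nvme_attach_controller fragment per subsystem is built from a here-document, the fragments are joined with commas, and the result is pretty-printed through jq. A minimal sketch of that pattern (simplified; the real helper also wraps the fragments in the bdev-subsystem envelope bdevperf expects, which is assumed rather than shown here):

# Sketch of the config assembly visible in the xtrace above; not the verbatim
# nvmf/common.sh helper. Assumes TEST_TRANSPORT/NVMF_FIRST_TARGET_IP/NVMF_PORT
# are already exported, as they are in this run (tcp / 10.0.0.2 / 4420).
config=()
for subsystem in "${@:-1}"; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

# Join with commas and emit one document, as the IFS=, / printf lines below show.
IFS=,
printf '%s\n' "${config[*]}"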
00:29:22.198 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:22.198 21:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:22.198 "params": { 00:29:22.198 "name": "Nvme1", 00:29:22.198 "trtype": "tcp", 00:29:22.198 "traddr": "10.0.0.2", 00:29:22.198 "adrfam": "ipv4", 00:29:22.198 "trsvcid": "4420", 00:29:22.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:22.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:22.198 "hdgst": false, 00:29:22.198 "ddgst": false 00:29:22.198 }, 00:29:22.198 "method": "bdev_nvme_attach_controller" 00:29:22.198 },{ 00:29:22.198 "params": { 00:29:22.198 "name": "Nvme2", 00:29:22.198 "trtype": "tcp", 00:29:22.198 "traddr": "10.0.0.2", 00:29:22.198 "adrfam": "ipv4", 00:29:22.198 "trsvcid": "4420", 00:29:22.198 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:22.198 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:22.198 "hdgst": false, 00:29:22.198 "ddgst": false 00:29:22.198 }, 00:29:22.198 "method": "bdev_nvme_attach_controller" 00:29:22.198 },{ 00:29:22.198 "params": { 00:29:22.198 "name": "Nvme3", 00:29:22.198 "trtype": "tcp", 00:29:22.198 "traddr": "10.0.0.2", 00:29:22.198 "adrfam": "ipv4", 00:29:22.198 "trsvcid": "4420", 00:29:22.198 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:22.198 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:22.198 "hdgst": false, 00:29:22.198 "ddgst": false 00:29:22.198 }, 00:29:22.198 "method": "bdev_nvme_attach_controller" 00:29:22.198 },{ 00:29:22.198 "params": { 00:29:22.198 "name": "Nvme4", 00:29:22.198 "trtype": "tcp", 00:29:22.198 "traddr": "10.0.0.2", 00:29:22.198 "adrfam": "ipv4", 00:29:22.198 "trsvcid": "4420", 00:29:22.198 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:22.198 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:22.198 "hdgst": false, 00:29:22.198 "ddgst": false 00:29:22.198 }, 00:29:22.198 "method": "bdev_nvme_attach_controller" 00:29:22.198 },{ 00:29:22.198 "params": { 00:29:22.198 "name": "Nvme5", 00:29:22.198 "trtype": "tcp", 00:29:22.198 "traddr": "10.0.0.2", 00:29:22.198 "adrfam": "ipv4", 00:29:22.198 "trsvcid": "4420", 00:29:22.198 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:22.198 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:22.198 "hdgst": false, 00:29:22.198 "ddgst": false 00:29:22.198 }, 00:29:22.198 "method": "bdev_nvme_attach_controller" 00:29:22.198 },{ 00:29:22.198 "params": { 00:29:22.198 "name": "Nvme6", 00:29:22.198 "trtype": "tcp", 00:29:22.198 "traddr": "10.0.0.2", 00:29:22.198 "adrfam": "ipv4", 00:29:22.198 "trsvcid": "4420", 00:29:22.198 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:22.198 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:22.198 "hdgst": false, 00:29:22.198 "ddgst": false 00:29:22.198 }, 00:29:22.198 "method": "bdev_nvme_attach_controller" 00:29:22.198 },{ 00:29:22.198 "params": { 00:29:22.198 "name": "Nvme7", 00:29:22.198 "trtype": "tcp", 00:29:22.198 "traddr": "10.0.0.2", 00:29:22.198 "adrfam": "ipv4", 00:29:22.198 "trsvcid": "4420", 00:29:22.198 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:22.198 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:22.198 "hdgst": false, 00:29:22.198 "ddgst": false 00:29:22.198 }, 00:29:22.198 "method": "bdev_nvme_attach_controller" 00:29:22.198 },{ 00:29:22.198 "params": { 00:29:22.198 "name": "Nvme8", 00:29:22.198 "trtype": "tcp", 00:29:22.198 "traddr": "10.0.0.2", 00:29:22.198 "adrfam": "ipv4", 00:29:22.198 "trsvcid": "4420", 00:29:22.198 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:22.198 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:22.198 "hdgst": false, 00:29:22.198 "ddgst": false 00:29:22.198 }, 00:29:22.198 "method": "bdev_nvme_attach_controller" 00:29:22.198 },{ 00:29:22.198 "params": { 00:29:22.198 "name": "Nvme9", 00:29:22.198 "trtype": "tcp", 00:29:22.198 "traddr": "10.0.0.2", 00:29:22.198 "adrfam": "ipv4", 00:29:22.198 "trsvcid": "4420", 00:29:22.198 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:22.198 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:22.198 "hdgst": false, 00:29:22.198 "ddgst": false 00:29:22.198 }, 00:29:22.198 "method": "bdev_nvme_attach_controller" 00:29:22.198 },{ 00:29:22.198 "params": { 00:29:22.198 "name": "Nvme10", 00:29:22.198 "trtype": "tcp", 00:29:22.198 "traddr": "10.0.0.2", 00:29:22.198 "adrfam": "ipv4", 00:29:22.198 "trsvcid": "4420", 00:29:22.198 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:22.198 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:22.199 "hdgst": false, 00:29:22.199 "ddgst": false 00:29:22.199 }, 00:29:22.199 "method": "bdev_nvme_attach_controller" 00:29:22.199 }' 00:29:22.199 [2024-11-19 21:18:55.890230] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:29:22.199 [2024-11-19 21:18:55.890367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3084763 ] 00:29:22.457 [2024-11-19 21:18:56.030140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.457 [2024-11-19 21:18:56.158622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.984 Running I/O for 10 seconds... 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:24.984 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3084763 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3084763 ']' 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3084763 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3084763 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:25.243 21:18:58 
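The first poll sees only 67 completed reads, so waitforio sleeps and retries; the second poll returns 131, which clears the 100-read threshold and the helper reports the controllers as live. Its shape, reconstructed from the trace (argument validation omitted; rpc_cmd there is a thin wrapper around scripts/rpc.py with the given socket):

# Reconstruction of target/shutdown.sh's waitforio gate, as a sketch only.
waitforio() {
  local sock=$1 bdev=$2 i read_io_count
  for ((i = 10; i != 0; i--)); do
    read_io_count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
      | jq -r '.bdevs[0].num_read_ops')
    ((read_io_count >= 100)) && return 0   # enough I/O observed: controllers are live
    sleep 0.25
  done
  return 1
}

waitforio /var/tmp/bdevperf.sock Nvme1n1   # 67 on the first pass, 131 on the second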
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3084763' 00:29:25.243 killing process with pid 3084763 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3084763 00:29:25.243 21:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3084763 00:29:25.504 Received shutdown signal, test time was about 0.876253 seconds 00:29:25.504 00:29:25.504 Latency(us) 00:29:25.504 [2024-11-19T20:18:59.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.504 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:25.504 Verification LBA range: start 0x0 length 0x400 00:29:25.504 Nvme1n1 : 0.86 223.67 13.98 0.00 0.00 281112.27 24175.50 276513.37 00:29:25.504 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:25.504 Verification LBA range: start 0x0 length 0x400 00:29:25.504 Nvme2n1 : 0.87 221.14 13.82 0.00 0.00 278544.69 20291.89 301368.51 00:29:25.504 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:25.504 Verification LBA range: start 0x0 length 0x400 00:29:25.504 Nvme3n1 : 0.88 219.34 13.71 0.00 0.00 274536.93 22427.88 301368.51 00:29:25.504 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:25.504 Verification LBA range: start 0x0 length 0x400 00:29:25.504 Nvme4n1 : 0.86 222.03 13.88 0.00 0.00 263981.64 24660.95 330883.98 00:29:25.504 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:25.504 Verification LBA range: start 0x0 length 0x400 00:29:25.504 Nvme5n1 : 0.80 159.16 9.95 0.00 0.00 356656.17 38836.15 281173.71 00:29:25.504 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:25.504 Verification LBA range: start 0x0 length 0x400 00:29:25.505 Nvme6n1 : 0.81 157.88 9.87 0.00 0.00 350114.32 24563.86 285834.05 00:29:25.505 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:25.505 Verification LBA range: start 0x0 length 0x400 00:29:25.505 Nvme7n1 : 0.85 226.04 14.13 0.00 0.00 239152.29 21359.88 279620.27 00:29:25.505 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:25.505 Verification LBA range: start 0x0 length 0x400 00:29:25.505 Nvme8n1 : 0.85 231.73 14.48 0.00 0.00 225747.62 5048.70 290494.39 00:29:25.505 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:25.505 Verification LBA range: start 0x0 length 0x400 00:29:25.505 Nvme9n1 : 0.83 153.94 9.62 0.00 0.00 330888.53 23010.42 335544.32 00:29:25.505 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:25.505 Verification LBA range: start 0x0 length 0x400 00:29:25.505 Nvme10n1 : 0.82 155.58 9.72 0.00 0.00 316909.80 29709.65 309135.74 00:29:25.505 [2024-11-19T20:18:59.300Z] =================================================================================================================== 00:29:25.505 [2024-11-19T20:18:59.300Z] Total : 1970.51 123.16 0.00 0.00 284411.44 5048.70 335544.32 00:29:26.440 21:19:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@115 -- # kill -0 3084521 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:27.373 rmmod nvme_tcp 00:29:27.373 rmmod nvme_fabrics 00:29:27.373 rmmod nvme_keyring 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3084521 ']' 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3084521 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3084521 ']' 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3084521 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3084521 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3084521' 00:29:27.373 killing process with pid 3084521 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@973 -- # kill 3084521 00:29:27.373 21:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3084521 00:29:30.656 21:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:30.656 21:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:30.656 21:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:30.656 21:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:30.656 21:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:30.656 21:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:30.656 21:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:30.656 21:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:30.656 21:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:30.656 21:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.656 21:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.656 21:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:32.555 00:29:32.555 real 0m12.479s 00:29:32.555 user 0m42.071s 00:29:32.555 sys 0m1.896s 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.555 ************************************ 00:29:32.555 END TEST nvmf_shutdown_tc2 00:29:32.555 ************************************ 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:32.555 ************************************ 00:29:32.555 START TEST nvmf_shutdown_tc3 00:29:32.555 ************************************ 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:32.555 21:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:32.555 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:32.556 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:32.556 21:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:32.556 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:32.556 21:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:32.556 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.556 21:19:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:32.556 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:32.556 21:19:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:32.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:29:32.556 00:29:32.556 --- 10.0.0.2 ping statistics --- 00:29:32.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.556 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:29:32.556 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:32.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:32.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:29:32.557 00:29:32.557 --- 10.0.0.1 ping statistics --- 00:29:32.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.557 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3086141 00:29:32.557 21:19:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3086141 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3086141 ']' 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.557 21:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:32.557 [2024-11-19 21:19:06.293311] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:29:32.557 [2024-11-19 21:19:06.293477] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.815 [2024-11-19 21:19:06.446882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:32.815 [2024-11-19 21:19:06.588451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:32.815 [2024-11-19 21:19:06.588535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:32.815 [2024-11-19 21:19:06.588569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:32.815 [2024-11-19 21:19:06.588592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:32.815 [2024-11-19 21:19:06.588612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
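A minimal sketch (not verbatim from this trace) of the per-subsystem setup behind the Malloc1..Malloc10 and "Listening on 10.0.0.2 port 4420" lines that follow — shutdown.sh's create_subsystems step wraps these RPCs, so the Malloc size, block size, and serial number shown here are illustrative assumptions only, assuming SPDK's standard scripts/rpc.py interface:

  # Hypothetical sketch: one Malloc-backed NVMe-oF TCP subsystem per index, matching
  # the cnode1..cnode10 subsystem names and the 10.0.0.2:4420 listener used in this run.
  for i in $(seq 1 10); do
    ./scripts/rpc.py bdev_malloc_create -b Malloc$i 128 512                              # size/block size assumed
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i      # serial number assumed
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done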
00:29:32.815 [2024-11-19 21:19:06.591409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:32.815 [2024-11-19 21:19:06.591532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:32.815 [2024-11-19 21:19:06.591573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.815 [2024-11-19 21:19:06.591580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:33.750 [2024-11-19 21:19:07.322630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.750 21:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:33.750 Malloc1 00:29:33.750 [2024-11-19 21:19:07.477891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.007 Malloc2 00:29:34.007 Malloc3 00:29:34.007 Malloc4 00:29:34.265 Malloc5 00:29:34.266 Malloc6 00:29:34.523 Malloc7 00:29:34.523 Malloc8 00:29:34.523 Malloc9 00:29:34.782 Malloc10 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3086444 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3086444 /var/tmp/bdevperf.sock 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3086444 ']' 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:34.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.782 { 00:29:34.782 "params": { 00:29:34.782 "name": "Nvme$subsystem", 00:29:34.782 "trtype": "$TEST_TRANSPORT", 00:29:34.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.782 "adrfam": "ipv4", 00:29:34.782 "trsvcid": "$NVMF_PORT", 00:29:34.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.782 "hdgst": ${hdgst:-false}, 00:29:34.782 "ddgst": ${ddgst:-false} 00:29:34.782 }, 00:29:34.782 "method": "bdev_nvme_attach_controller" 00:29:34.782 } 00:29:34.782 EOF 00:29:34.782 )") 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.782 { 00:29:34.782 "params": { 00:29:34.782 "name": "Nvme$subsystem", 00:29:34.782 "trtype": "$TEST_TRANSPORT", 00:29:34.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.782 "adrfam": "ipv4", 00:29:34.782 "trsvcid": "$NVMF_PORT", 00:29:34.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.782 "hdgst": ${hdgst:-false}, 00:29:34.782 "ddgst": ${ddgst:-false} 00:29:34.782 }, 00:29:34.782 "method": "bdev_nvme_attach_controller" 00:29:34.782 } 00:29:34.782 EOF 00:29:34.782 )") 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.782 { 00:29:34.782 "params": { 00:29:34.782 "name": 
"Nvme$subsystem", 00:29:34.782 "trtype": "$TEST_TRANSPORT", 00:29:34.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.782 "adrfam": "ipv4", 00:29:34.782 "trsvcid": "$NVMF_PORT", 00:29:34.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.782 "hdgst": ${hdgst:-false}, 00:29:34.782 "ddgst": ${ddgst:-false} 00:29:34.782 }, 00:29:34.782 "method": "bdev_nvme_attach_controller" 00:29:34.782 } 00:29:34.782 EOF 00:29:34.782 )") 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.782 { 00:29:34.782 "params": { 00:29:34.782 "name": "Nvme$subsystem", 00:29:34.782 "trtype": "$TEST_TRANSPORT", 00:29:34.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.782 "adrfam": "ipv4", 00:29:34.782 "trsvcid": "$NVMF_PORT", 00:29:34.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.782 "hdgst": ${hdgst:-false}, 00:29:34.782 "ddgst": ${ddgst:-false} 00:29:34.782 }, 00:29:34.782 "method": "bdev_nvme_attach_controller" 00:29:34.782 } 00:29:34.782 EOF 00:29:34.782 )") 00:29:34.782 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.783 { 00:29:34.783 "params": { 00:29:34.783 "name": "Nvme$subsystem", 00:29:34.783 "trtype": "$TEST_TRANSPORT", 00:29:34.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.783 "adrfam": "ipv4", 00:29:34.783 "trsvcid": "$NVMF_PORT", 00:29:34.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.783 "hdgst": ${hdgst:-false}, 00:29:34.783 "ddgst": ${ddgst:-false} 00:29:34.783 }, 00:29:34.783 "method": "bdev_nvme_attach_controller" 00:29:34.783 } 00:29:34.783 EOF 00:29:34.783 )") 00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.783 { 00:29:34.783 "params": { 00:29:34.783 "name": "Nvme$subsystem", 00:29:34.783 "trtype": "$TEST_TRANSPORT", 00:29:34.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.783 "adrfam": "ipv4", 00:29:34.783 "trsvcid": "$NVMF_PORT", 00:29:34.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.783 "hdgst": ${hdgst:-false}, 00:29:34.783 "ddgst": ${ddgst:-false} 00:29:34.783 }, 00:29:34.783 "method": "bdev_nvme_attach_controller" 00:29:34.783 } 00:29:34.783 EOF 00:29:34.783 )") 00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.783 { 00:29:34.783 "params": { 00:29:34.783 "name": "Nvme$subsystem", 00:29:34.783 "trtype": "$TEST_TRANSPORT", 00:29:34.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.783 "adrfam": "ipv4", 00:29:34.783 "trsvcid": "$NVMF_PORT", 00:29:34.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.783 "hdgst": ${hdgst:-false}, 00:29:34.783 "ddgst": ${ddgst:-false} 00:29:34.783 }, 00:29:34.783 "method": "bdev_nvme_attach_controller" 00:29:34.783 } 00:29:34.783 EOF 00:29:34.783 )") 00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.783 { 00:29:34.783 "params": { 00:29:34.783 "name": "Nvme$subsystem", 00:29:34.783 "trtype": "$TEST_TRANSPORT", 00:29:34.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.783 "adrfam": "ipv4", 00:29:34.783 "trsvcid": "$NVMF_PORT", 00:29:34.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.783 "hdgst": ${hdgst:-false}, 00:29:34.783 "ddgst": ${ddgst:-false} 00:29:34.783 }, 00:29:34.783 "method": "bdev_nvme_attach_controller" 00:29:34.783 } 00:29:34.783 EOF 00:29:34.783 )") 00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.783 { 00:29:34.783 "params": { 00:29:34.783 "name": "Nvme$subsystem", 00:29:34.783 "trtype": "$TEST_TRANSPORT", 00:29:34.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.783 "adrfam": "ipv4", 00:29:34.783 "trsvcid": "$NVMF_PORT", 00:29:34.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.783 "hdgst": ${hdgst:-false}, 00:29:34.783 "ddgst": ${ddgst:-false} 00:29:34.783 }, 00:29:34.783 "method": "bdev_nvme_attach_controller" 00:29:34.783 } 00:29:34.783 EOF 00:29:34.783 )") 00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.783 { 00:29:34.783 "params": { 00:29:34.783 "name": "Nvme$subsystem", 00:29:34.783 "trtype": "$TEST_TRANSPORT", 00:29:34.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.783 "adrfam": "ipv4", 00:29:34.783 "trsvcid": "$NVMF_PORT", 00:29:34.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.783 "hdgst": ${hdgst:-false}, 00:29:34.783 "ddgst": ${ddgst:-false} 00:29:34.783 }, 00:29:34.783 "method": "bdev_nvme_attach_controller" 00:29:34.783 } 00:29:34.783 EOF 00:29:34.783 )") 00:29:34.783 21:19:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:34.783 21:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:34.783 "params": { 00:29:34.783 "name": "Nvme1", 00:29:34.783 "trtype": "tcp", 00:29:34.783 "traddr": "10.0.0.2", 00:29:34.783 "adrfam": "ipv4", 00:29:34.783 "trsvcid": "4420", 00:29:34.783 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.783 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:34.783 "hdgst": false, 00:29:34.783 "ddgst": false 00:29:34.783 }, 00:29:34.783 "method": "bdev_nvme_attach_controller" 00:29:34.783 },{ 00:29:34.783 "params": { 00:29:34.783 "name": "Nvme2", 00:29:34.783 "trtype": "tcp", 00:29:34.783 "traddr": "10.0.0.2", 00:29:34.783 "adrfam": "ipv4", 00:29:34.783 "trsvcid": "4420", 00:29:34.783 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:34.783 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:34.783 "hdgst": false, 00:29:34.783 "ddgst": false 00:29:34.783 }, 00:29:34.783 "method": "bdev_nvme_attach_controller" 00:29:34.783 },{ 00:29:34.783 "params": { 00:29:34.783 "name": "Nvme3", 00:29:34.783 "trtype": "tcp", 00:29:34.783 "traddr": "10.0.0.2", 00:29:34.783 "adrfam": "ipv4", 00:29:34.783 "trsvcid": "4420", 00:29:34.783 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:34.783 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:34.783 "hdgst": false, 00:29:34.783 "ddgst": false 00:29:34.783 }, 00:29:34.783 "method": "bdev_nvme_attach_controller" 00:29:34.783 },{ 00:29:34.783 "params": { 00:29:34.783 "name": "Nvme4", 00:29:34.783 "trtype": "tcp", 00:29:34.783 "traddr": "10.0.0.2", 00:29:34.783 "adrfam": "ipv4", 00:29:34.783 "trsvcid": "4420", 00:29:34.783 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:34.783 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:34.783 "hdgst": false, 00:29:34.783 "ddgst": false 00:29:34.783 }, 00:29:34.783 "method": "bdev_nvme_attach_controller" 00:29:34.783 },{ 00:29:34.783 "params": { 00:29:34.783 "name": "Nvme5", 00:29:34.783 "trtype": "tcp", 00:29:34.783 "traddr": "10.0.0.2", 00:29:34.783 "adrfam": "ipv4", 00:29:34.783 "trsvcid": "4420", 00:29:34.783 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:34.783 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:34.783 "hdgst": false, 00:29:34.783 "ddgst": false 00:29:34.783 }, 00:29:34.783 "method": "bdev_nvme_attach_controller" 00:29:34.783 },{ 00:29:34.783 "params": { 00:29:34.783 "name": "Nvme6", 00:29:34.783 "trtype": "tcp", 00:29:34.783 "traddr": "10.0.0.2", 00:29:34.783 "adrfam": "ipv4", 00:29:34.783 "trsvcid": "4420", 00:29:34.783 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:34.783 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:34.783 "hdgst": false, 00:29:34.783 "ddgst": false 00:29:34.783 }, 00:29:34.784 "method": "bdev_nvme_attach_controller" 00:29:34.784 },{ 00:29:34.784 "params": { 00:29:34.784 "name": "Nvme7", 00:29:34.784 "trtype": "tcp", 00:29:34.784 "traddr": "10.0.0.2", 00:29:34.784 "adrfam": "ipv4", 00:29:34.784 "trsvcid": "4420", 00:29:34.784 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:34.784 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:34.784 "hdgst": false, 00:29:34.784 "ddgst": false 00:29:34.784 }, 00:29:34.784 "method": "bdev_nvme_attach_controller" 00:29:34.784 },{ 00:29:34.784 "params": { 00:29:34.784 "name": "Nvme8", 00:29:34.784 "trtype": "tcp", 
00:29:34.784 "traddr": "10.0.0.2", 00:29:34.784 "adrfam": "ipv4", 00:29:34.784 "trsvcid": "4420", 00:29:34.784 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:34.784 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:34.784 "hdgst": false, 00:29:34.784 "ddgst": false 00:29:34.784 }, 00:29:34.784 "method": "bdev_nvme_attach_controller" 00:29:34.784 },{ 00:29:34.784 "params": { 00:29:34.784 "name": "Nvme9", 00:29:34.784 "trtype": "tcp", 00:29:34.784 "traddr": "10.0.0.2", 00:29:34.784 "adrfam": "ipv4", 00:29:34.784 "trsvcid": "4420", 00:29:34.784 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:34.784 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:34.784 "hdgst": false, 00:29:34.784 "ddgst": false 00:29:34.784 }, 00:29:34.784 "method": "bdev_nvme_attach_controller" 00:29:34.784 },{ 00:29:34.784 "params": { 00:29:34.784 "name": "Nvme10", 00:29:34.784 "trtype": "tcp", 00:29:34.784 "traddr": "10.0.0.2", 00:29:34.784 "adrfam": "ipv4", 00:29:34.784 "trsvcid": "4420", 00:29:34.784 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:34.784 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:34.784 "hdgst": false, 00:29:34.784 "ddgst": false 00:29:34.784 }, 00:29:34.784 "method": "bdev_nvme_attach_controller" 00:29:34.784 }' 00:29:34.784 [2024-11-19 21:19:08.499262] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:29:34.784 [2024-11-19 21:19:08.499446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086444 ] 00:29:35.042 [2024-11-19 21:19:08.645438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.042 [2024-11-19 21:19:08.773142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.943 Running I/O for 10 seconds... 
00:29:37.508 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:37.508 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:37.508 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:37.508 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.508 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:37.508 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.508 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:37.508 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:37.508 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:37.508 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:37.508 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:37.508 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:37.508 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:37.508 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:37.508 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:37.508 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:37.508 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.508 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:37.767 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.767 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:37.767 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:37.767 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3086141 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3086141 ']' 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3086141 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3086141 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3086141' 00:29:38.041 killing process with pid 3086141 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3086141 00:29:38.041 21:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3086141 00:29:38.041 [2024-11-19 21:19:11.670472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:38.041 [2024-11-19 21:19:11.670571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:38.041 [2024-11-19 21:19:11.670594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:38.041 [2024-11-19 21:19:11.670613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:38.041 [2024-11-19 21:19:11.670632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:38.041 [2024-11-19 21:19:11.670650] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:38.041 [2024-11-19 21:19:11.675694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:38.042 [2024-11-19 21:19:11.679496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:38.044 [2024-11-19 21:19:11.685201]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:38.044 [2024-11-19 21:19:11.685219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:38.044 [2024-11-19 21:19:11.685237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:38.044 [2024-11-19 21:19:11.685254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:38.044 [2024-11-19 21:19:11.685270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:38.044 [2024-11-19 21:19:11.685288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:38.044 [2024-11-19 21:19:11.685304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:38.044 [2024-11-19 21:19:11.685321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:38.044 [2024-11-19 21:19:11.685343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:38.044 [2024-11-19 21:19:11.685361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:38.044 [2024-11-19 21:19:11.686476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.044 [2024-11-19 21:19:11.686533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.686564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.044 [2024-11-19 21:19:11.686587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.686609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.044 [2024-11-19 21:19:11.686631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.686654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.044 [2024-11-19 21:19:11.686675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.686695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:38.044 [2024-11-19 21:19:11.686781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.044 [2024-11-19 21:19:11.686809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:38.044 [2024-11-19 21:19:11.686842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.044 [2024-11-19 21:19:11.686876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.686901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.044 [2024-11-19 21:19:11.686922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.686944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.044 [2024-11-19 21:19:11.686965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.686985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:38.044 [2024-11-19 21:19:11.687120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.044 [2024-11-19 21:19:11.687150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.687175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.044 [2024-11-19 21:19:11.687197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.687221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.044 [2024-11-19 21:19:11.687243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.687271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.044 [2024-11-19 21:19:11.687293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.687312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:38.044 [2024-11-19 21:19:11.687389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.044 [2024-11-19 21:19:11.687418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.687461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.044 [2024-11-19 21:19:11.687483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.687506] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.044 [2024-11-19 21:19:11.687528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.687549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:38.044 [2024-11-19 21:19:11.687572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.687593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:38.044 [2024-11-19 21:19:11.687907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.044 [2024-11-19 21:19:11.687950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.687993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.044 [2024-11-19 21:19:11.688018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.688047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.044 [2024-11-19 21:19:11.688079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.688110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.044 [2024-11-19 21:19:11.688138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.688164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.044 [2024-11-19 21:19:11.688187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.688214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.044 [2024-11-19 21:19:11.688236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.688262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.044 [2024-11-19 21:19:11.688290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.688318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.044 [2024-11-19 21:19:11.688341] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.688378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.044 [2024-11-19 21:19:11.688401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.688427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.044 [2024-11-19 21:19:11.688451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.688478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.044 [2024-11-19 21:19:11.688484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:38.044 [2024-11-19 21:19:11.688500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.688529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:38.044 [2024-11-19 21:19:11.688530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.044 [2024-11-19 21:19:11.688555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.044 [2024-11-19 21:19:11.688555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:38.044 [2024-11-19 21:19:11.688577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:38.045 [2024-11-19 21:19:11.688583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.688606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.688631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.688654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.688679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.688702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.688728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.688750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.688776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.688798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.688829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.688854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.688880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.688902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.688928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.688966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.688992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.689015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.689040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.689062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.689123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.689147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.689173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.689195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.689220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.689243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.689270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.689294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.689320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.689343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.689378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.689401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.689429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.689452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.689479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.689506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.689534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.689558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.689584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.689608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.689633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.689656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.689682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.689706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.689732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.689755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.689780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.689804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.689830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.689868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.689894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.689916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.689941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.689963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.689988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.690010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.690035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.690080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.690122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.690122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.045 [2024-11-19 21:19:11.690145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.045 [2024-11-19 21:19:11.690165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.045 [2024-11-19 21:19:11.690171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.045 [2024-11-19 21:19:11.690187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.046 [2024-11-19 21:19:11.690205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.046 [2024-11-19
21:19:11.690243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.046 [2024-11-19 21:19:11.690262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.046 [2024-11-19 21:19:11.690281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.046 [2024-11-19 21:19:11.690299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.046 [2024-11-19 21:19:11.690335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.046 [2024-11-19 21:19:11.690354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.046 [2024-11-19 21:19:11.690413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.046 [2024-11-19 21:19:11.690430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:1[2024-11-19 21:19:11.690448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.046 with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.046 [2024-11-19 21:19:11.690487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.046 [2024-11-19 21:19:11.690505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.046 [2024-11-19 21:19:11.690523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.046 [2024-11-19 21:19:11.690558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-19 21:19:11.690576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.046 with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.046 [2024-11-19 21:19:11.690612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.046 [2024-11-19 21:19:11.690629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.046 [2024-11-19 21:19:11.690665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.046 [2024-11-19 21:19:11.690682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:1[2024-11-19 21:19:11.690700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.046 with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same [2024-11-19 21:19:11.690721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(6) to be set 00:29:38.046 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.046 [2024-11-19 21:19:11.690747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.046 [2024-11-19 21:19:11.690767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.046 [2024-11-19 21:19:11.690785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:1[2024-11-19 21:19:11.690803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.046 with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.046 [2024-11-19 21:19:11.690839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.046 [2024-11-19 21:19:11.690856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.046 [2024-11-19 21:19:11.690874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.046 [2024-11-19 21:19:11.690908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the 
state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.046 [2024-11-19 21:19:11.690925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.046 [2024-11-19 21:19:11.690959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.046 [2024-11-19 21:19:11.690977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.690991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.046 [2024-11-19 21:19:11.690998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.691013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.046 [2024-11-19 21:19:11.691016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.691033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.046 [2024-11-19 21:19:11.691038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.046 [2024-11-19 21:19:11.691050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.047 [2024-11-19 21:19:11.691060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.691067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.047 [2024-11-19 21:19:11.691125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.047 [2024-11-19 21:19:11.691122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.691143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.047 [2024-11-19 21:19:11.691161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.047 [2024-11-19 21:19:11.691161] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.691178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.047 [2024-11-19 21:19:11.691189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.691195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.047 [2024-11-19 21:19:11.691212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.047 [2024-11-19 21:19:11.691212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.691231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.047 [2024-11-19 21:19:11.691239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.691249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.047 [2024-11-19 21:19:11.691262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.691267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.047 [2024-11-19 21:19:11.691284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.047 [2024-11-19 21:19:11.691287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.691300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.047 [2024-11-19 21:19:11.691314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.691318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.047 [2024-11-19 21:19:11.691336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:38.047 [2024-11-19 21:19:11.691922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.691958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.691991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047
[2024-11-19 21:19:11.692042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.692114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.692162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.692210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.692258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.692306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.692378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.692426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.692474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.692528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 
21:19:11.692577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.692624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.692671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.692718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.692767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.692815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.692862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.692910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.692956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.692979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.693005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.693028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 
21:19:11.693053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.693098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.693133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.693163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.693190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.693214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.693240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.693262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.693288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.693312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.047 [2024-11-19 21:19:11.693337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.047 [2024-11-19 21:19:11.693360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 21:19:11.693401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.693424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 21:19:11.693450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.693471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 21:19:11.693496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.693518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 21:19:11.693543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.693565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 
21:19:11.693590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.693612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 21:19:11.693637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.693661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 21:19:11.693686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.693707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 21:19:11.693733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.693759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 21:19:11.693784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.693797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same [2024-11-19 21:19:11.693806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(6) to be set 00:29:38.048 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 21:19:11.693833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.693837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.693852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.693859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 21:19:11.693870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.693884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:1[2024-11-19 21:19:11.693888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.693907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.693908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 21:19:11.693925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.693933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.693943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.693961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.693962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 21:19:11.693977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.693987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.693995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-19 21:19:11.694012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.694063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-19 21:19:11.694097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.694138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 21:19:11.694157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 
21:19:11.694182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.694194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 21:19:11.694212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.694249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 21:19:11.694267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:1[2024-11-19 21:19:11.694285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 21:19:11.694323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.694341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 21:19:11.694359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.694414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 21:19:11.694431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.694469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.048 [2024-11-19 21:19:11.694487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.048 [2024-11-19 21:19:11.694498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.048 [2024-11-19 21:19:11.694505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same [2024-11-19 21:19:11.694521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(6) to be set 00:29:38.049 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.049 [2024-11-19 21:19:11.694542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.049 [2024-11-19 21:19:11.694560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.049 [2024-11-19 21:19:11.694578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.049 [2024-11-19 21:19:11.694612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.049 [2024-11-19 21:19:11.694630] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:1[2024-11-19 21:19:11.694647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.049 with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.049 [2024-11-19 21:19:11.694689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.049 [2024-11-19 21:19:11.694706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.049 [2024-11-19 21:19:11.694724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.049 [2024-11-19 21:19:11.694760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.049 [2024-11-19 21:19:11.694777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.049 [2024-11-19 21:19:11.694811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.049 [2024-11-19 21:19:11.694829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the 
state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.049 [2024-11-19 21:19:11.694864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.049 [2024-11-19 21:19:11.694882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:1[2024-11-19 21:19:11.694899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.049 with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same [2024-11-19 21:19:11.694921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(6) to be set 00:29:38.049 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.049 [2024-11-19 21:19:11.694943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.049 [2024-11-19 21:19:11.694967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.694972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.049 [2024-11-19 21:19:11.694986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.695004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:38.049 [2024-11-19 21:19:11.695003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.049 [2024-11-19 21:19:11.695026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.049 [2024-11-19 21:19:11.695051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.049 [2024-11-19 21:19:11.695109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.049 [2024-11-19 21:19:11.695138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.049 [2024-11-19 21:19:11.695161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.049 [2024-11-19 21:19:11.695187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:38.049 [2024-11-19 21:19:11.695210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:38.049 [2024-11-19 21:19:11.695235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:38.049 [2024-11-19 21:19:11.695258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:38.049 [2024-11-19 21:19:11.695321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 
00:29:38.049 [2024-11-19 21:19:11.698269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 
(the notice above was emitted continuously, dozens of times, for tqpair=0x618000008c80 between 21:19:11.698269 and 21:19:11.699566) 
00:29:38.050 [2024-11-19 21:19:11.701473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 
00:29:38.050 [2024-11-19 21:19:11.701536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 
00:29:38.050 [2024-11-19 21:19:11.701620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 
00:29:38.050 [2024-11-19 21:19:11.701667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 
00:29:38.050 [2024-11-19 21:19:11.701722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 
00:29:38.050 [2024-11-19 21:19:11.701806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 
(the notice above was emitted continuously for tqpair=0x618000009080 between 21:19:11.701806 and 21:19:11.703130, interleaved with the host-side messages that follow) 
00:29:38.050 [2024-11-19 21:19:11.701810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:38.050 [2024-11-19 21:19:11.701845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
(the ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair above was printed for admin cid:0 through cid:3 on each of four host admin qpairs, with nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state recv-state notices for tqpair=0x6150001f4d00, 0x6150001f6b00, 0x6150001f5700 and 0x6150001f6100) 
00:29:38.051 [2024-11-19 21:19:11.702347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 
00:29:38.051 [2024-11-19 21:19:11.702401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 
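The "(00/08)" suffix on the aborted completions above is the NVMe status pair (status code type / status code): type 00h is Generic Command Status and code 08h is Command Aborted due to SQ Deletion, the completion a host expects for commands still outstanding when their submission queue is deleted during a controller reset. A minimal standalone C sketch of that decoding follows; the helper name and the deliberately tiny lookup table are illustrative, not SPDK's own code.

    /* Sketch only: decode the "(SCT/SC)" pair printed after each aborted
     * completion in the dump above. Covers just the codes seen here. */
    #include <stdio.h>

    static const char *status_string(unsigned sct, unsigned sc)
    {
        if (sct == 0x0) {                    /* Generic Command Status */
            switch (sc) {
            case 0x00: return "SUCCESS";
            case 0x07: return "ABORTED - BY REQUEST";
            case 0x08: return "ABORTED - SQ DELETION";
            }
        }
        return "UNKNOWN";
    }

    int main(void)
    {
        unsigned sct = 0x00, sc = 0x08;      /* what "(00/08)" encodes */
        printf("%s (%02x/%02x)\n", status_string(sct, sc), sct, sc);
        return 0;
    }

Built with any C compiler, this prints "ABORTED - SQ DELETION (00/08)", matching the completion lines in the dump.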
00:29:38.051 [2024-11-19 21:19:11.704562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 
(the notice above was emitted continuously for tqpair=0x618000009480 between 21:19:11.704562 and 21:19:11.705881, interleaved with the host-side reconnect errors that follow) 
00:29:38.052 [2024-11-19 21:19:11.705600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:38.052 [2024-11-19 21:19:11.705644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3900 with addr=10.0.0.2, port=4420 
00:29:38.052 [2024-11-19 21:19:11.705670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 
00:29:38.052 [2024-11-19 21:19:11.705781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:38.052 [2024-11-19 21:19:11.705820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 
00:29:38.052 [2024-11-19 21:19:11.705845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 
00:29:38.052 [2024-11-19 21:19:11.705924] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:29:38.052 [2024-11-19 21:19:11.706014] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:29:38.052 [2024-11-19 21:19:11.706365] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:29:38.052 [2024-11-19 21:19:11.706468] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:29:38.052 [2024-11-19 21:19:11.706742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 
00:29:38.052 [2024-11-19 21:19:11.706780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 
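errno = 111 in the posix_sock_create messages above is ECONNREFUSED on Linux: while the target listener on 10.0.0.2:4420 is being torn down and re-created as part of the reset, the host's reconnect attempt finds nothing listening and is refused. A small standalone C sketch that produces the same errno against a port with no listener; the loopback address and the assumption that nothing listens on port 4420 locally are placeholders for illustration.

    /* Sketch only: connect() to a TCP port with no listener fails with
     * ECONNREFUSED, which Linux numbers as errno 111 -- the value seen in
     * the reconnect attempts above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                      /* NVMe/TCP default port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* assumed: no listener here */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* Expected output: connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

Other platforms number their errnos differently, which is why portable code checks errno == ECONNREFUSED rather than the literal 111.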
00:29:38.052 [2024-11-19 21:19:11.706859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:38.052 [2024-11-19 21:19:11.706896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
(the READ / ABORTED - SQ DELETION pair above was then printed for every remaining outstanding command on this qpair between 21:19:11.706934 and 21:19:11.710173: READ cid:8 through cid:63, lba 17408 through 24448 advancing by 128, len:128, followed by WRITE cid:0 through cid:6, lba 24576 through 25344) 
00:29:38.054 [2024-11-19 21:19:11.710196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9f80 is same with the state(6) to be set 
00:29:38.054 [2024-11-19 21:19:11.710668] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:29:38.054 [2024-11-19 21:19:11.710892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:38.054 [2024-11-19 21:19:11.710922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
(an equivalent dump followed for a second qpair between 21:19:11.710957 and 21:19:11.712273: READ cid:1 through cid:26, lba 16512 through 19712 advancing by 128, each completed as ABORTED - SQ DELETION (00/08)) 
00:29:38.055 [2024-11-19 21:19:11.712299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.055 [2024-11-19 21:19:11.712321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.712348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.712371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.712396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.712433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.712460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.712483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.712508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.712543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.712570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.712593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.712618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.712640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.712665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.712687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.712713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.712735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.712764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.712787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.712812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 
21:19:11.712835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.712863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.712886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.712912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.712934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.712960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.712983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.713008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.713047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.713081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.713114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.713141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.713165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.713191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.713214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.713240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.713263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.713289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.713312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.728593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.728699] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.728730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.728762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.728790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.728814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.728842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.728875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.728902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.728926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.728963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.728986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.729013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.729035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.729061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.729096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.055 [2024-11-19 21:19:11.729134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.055 [2024-11-19 21:19:11.729158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.056 [2024-11-19 21:19:11.729185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.056 [2024-11-19 21:19:11.729208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.056 [2024-11-19 21:19:11.729234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.056 [2024-11-19 21:19:11.729258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.056 [2024-11-19 21:19:11.729283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.056 [2024-11-19 21:19:11.729306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.056 [2024-11-19 21:19:11.729332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.056 [2024-11-19 21:19:11.729354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.056 [2024-11-19 21:19:11.729396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.056 [2024-11-19 21:19:11.729418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.056 [2024-11-19 21:19:11.729466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.056 [2024-11-19 21:19:11.729489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.056 [2024-11-19 21:19:11.729515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.056 [2024-11-19 21:19:11.729539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.056 [2024-11-19 21:19:11.729564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.056 [2024-11-19 21:19:11.729588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.056 [2024-11-19 21:19:11.729613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb600 is same with the state(6) to be set
00:29:38.056 [2024-11-19 21:19:11.730018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:29:38.056 [2024-11-19 21:19:11.730048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:29:38.056 [2024-11-19 21:19:11.730086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:29:38.056 [2024-11-19 21:19:11.730126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:29:38.056 [2024-11-19 21:19:11.730156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:29:38.056 [2024-11-19 21:19:11.730176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:29:38.056 [2024-11-19 21:19:11.730197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:29:38.056 [2024-11-19 21:19:11.730216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:29:38.056 [2024-11-19 21:19:11.730383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:38.056 [2024-11-19 21:19:11.730414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.056 [2024-11-19 21:19:11.730439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:38.056 [2024-11-19 21:19:11.730461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.056 [2024-11-19 21:19:11.730484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:38.056 [2024-11-19 21:19:11.730506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.056 [2024-11-19 21:19:11.730529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:38.056 [2024-11-19 21:19:11.730551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.056 [2024-11-19 21:19:11.730571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set
00:29:38.056 [2024-11-19 21:19:11.730622] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:29:38.056 [2024-11-19 21:19:11.730666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor
00:29:38.056 [2024-11-19 21:19:11.730721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor
00:29:38.056 [2024-11-19 21:19:11.730773] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:29:38.056 [2024-11-19 21:19:11.730821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor
00:29:38.056 [2024-11-19 21:19:11.730871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor
00:29:38.056 [2024-11-19 21:19:11.750367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:38.056 [2024-11-19 21:19:11.750483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor
00:29:38.056 [2024-11-19 21:19:11.750528] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:29:38.056 [2024-11-19 21:19:11.750582] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:29:38.056 [2024-11-19 21:19:11.750629] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:29:38.056 [2024-11-19 21:19:11.750662] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:29:38.056 [2024-11-19 21:19:11.750788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.056 [2024-11-19 21:19:11.750822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.056 [2024-11-19 21:19:11.750860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.056 [2024-11-19 21:19:11.750885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.056 [2024-11-19 21:19:11.750912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.056 [2024-11-19 21:19:11.750936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.056 [2024-11-19 21:19:11.750963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.056 [2024-11-19 21:19:11.750987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.056 [2024-11-19 21:19:11.751014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.056 [2024-11-19 21:19:11.751036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.056 [2024-11-19 21:19:11.751062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.056 [2024-11-19 21:19:11.751096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.056 [2024-11-19 21:19:11.751123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.056 [2024-11-19 21:19:11.751148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.056 [2024-11-19 21:19:11.751175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.056 [2024-11-19 21:19:11.751204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.056 [2024-11-19 21:19:11.751231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.056 [2024-11-19 21:19:11.751256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.056 
[2024-11-19 21:19:11.751282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.056 [2024-11-19 21:19:11.751306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.056 [2024-11-19 21:19:11.751332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.056 [2024-11-19 21:19:11.751356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.056 [2024-11-19 21:19:11.751382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.056 [2024-11-19 21:19:11.751405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.056 [2024-11-19 21:19:11.751431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.056 [2024-11-19 21:19:11.751454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.056 [2024-11-19 21:19:11.751480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.056 [2024-11-19 21:19:11.751514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.056 [2024-11-19 21:19:11.751540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.056 [2024-11-19 21:19:11.751563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.056 [2024-11-19 21:19:11.751589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.056 [2024-11-19 21:19:11.751613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.751639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.751718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.751748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.751773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.751799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.751822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 
21:19:11.751849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.751872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.751904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.751929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.751955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.751978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.752028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.752088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.752147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.752195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.752245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.752294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.752343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752369] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.752392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.752453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.752510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.752559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.752614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.752663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.752712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.752762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.752811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.752861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.752910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.752960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.752986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.753009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.753034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.753058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.753094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.753119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.753145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.753168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.753194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.753222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.753249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.753274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.753300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.753325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.753351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.753374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.753399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.753422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.753447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.753470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.753496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.753519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.753544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.753567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.753593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.753617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.057 [2024-11-19 21:19:11.753643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.057 [2024-11-19 21:19:11.753666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.753691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.753715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.753741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.753765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.753792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.753815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.753846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.753870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.753896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.753919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.753945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.753967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.753994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.754017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.754042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.754065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.754101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.754126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.754150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa200 is same with the state(6) to be set 00:29:38.058 [2024-11-19 21:19:11.755778] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:38.058 [2024-11-19 21:19:11.755941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:38.058 task offset: 24576 on job bdev=Nvme3n1 fails 00:29:38.058 1423.78 IOPS, 88.99 MiB/s [2024-11-19T20:19:11.853Z] [2024-11-19 21:19:11.756007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:38.058 [2024-11-19 21:19:11.756036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:38.058 [2024-11-19 21:19:11.756287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.058 [2024-11-19 21:19:11.756329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:38.058 [2024-11-19 21:19:11.756355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:38.058 [2024-11-19 21:19:11.756999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.757032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.757067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 
[2024-11-19 21:19:11.757104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.757132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.757155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.757187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.757212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.757238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.757262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.757288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.757312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.757338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.757377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.757427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.757451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.757477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.757501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.757526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.757549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.757575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.757598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.058 [2024-11-19 21:19:11.757624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.058 [2024-11-19 21:19:11.757656] 
00:29:38.058 [2024-11-19 21:19:11.757683 .. 21:19:11.760329] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:12..63 nsid:1 lba:17920..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.060 [2024-11-19 21:19:11.760352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa980 is same with the state(6) to be set
00:29:38.060 [2024-11-19 21:19:11.762506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:29:38.060 [2024-11-19 21:19:11.762553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:29:38.060 [2024-11-19 21:19:11.762774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.060 [2024-11-19 21:19:11.762814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420
00:29:38.060 [2024-11-19 21:19:11.762840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set
00:29:38.060 [2024-11-19 21:19:11.763012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.060 [2024-11-19 21:19:11.763048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420
00:29:38.060 [2024-11-19 21:19:11.763081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set
00:29:38.060 [2024-11-19 21:19:11.763198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.060 [2024-11-19 21:19:11.763233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3900 with addr=10.0.0.2, port=4420
00:29:38.060 [2024-11-19 21:19:11.763257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set
00:29:38.060 [2024-11-19 21:19:11.763285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:29:38.060 [2024-11-19 21:19:11.763352] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:29:38.060 [2024-11-19 21:19:11.763391] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:29:38.060 [2024-11-19 21:19:11.763425] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:29:38.060 [2024-11-19 21:19:11.763460] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:29:38.060 [2024-11-19 21:19:11.764238 .. 21:19:11.767581] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.061 [2024-11-19 21:19:11.767605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fac00 is same with the state(6) to be set
00:29:38.061 [2024-11-19 21:19:11.769143 .. 21:19:11.772460] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:5..63 nsid:1 lba:17024..24448 len:128 and WRITE sqid:1 cid:0..4 nsid:1 lba:24576..25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.063 [2024-11-19 21:19:11.772483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fae80 is same with the state(6) to be set
00:29:38.063 [2024-11-19 21:19:11.774027 .. 21:19:11.774593] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0..10 nsid:1 lba:16384..17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.063 [2024-11-19 21:19:11.774620] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.063 [2024-11-19 21:19:11.774643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.774670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.774693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.774719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.774757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.774783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.774806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.774837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.774876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.774903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.774926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.774953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.774976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.775026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.775083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.775136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.775186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.775235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.775285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.775335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.775384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.775441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.775493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.775545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.775594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.775645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.775705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.775764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.775814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.775863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.775912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.775962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.775997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.776020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.776046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.776076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.776105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.776129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.776160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.776184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.776210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.776233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.776260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.776282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.776307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.776330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.776356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.776380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.776405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.776429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.776455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.776478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.776505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.776528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.776555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.776578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.776604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.776627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.064 [2024-11-19 21:19:11.776653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.064 [2024-11-19 21:19:11.776676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.776702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:38.065 [2024-11-19 21:19:11.776725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.776752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.776780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.776807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.776829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.776856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.776879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.776905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.776928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.776953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.776977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.777004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.777027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.777053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.777085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.777114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.777137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.777164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.777187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.777213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 
21:19:11.777235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.777262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.777285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.777312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.777335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.777359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb100 is same with the state(6) to be set 00:29:38.065 [2024-11-19 21:19:11.778975] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:38.065 [2024-11-19 21:19:11.779197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:38.065 [2024-11-19 21:19:11.779244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:38.065 [2024-11-19 21:19:11.779287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:38.065 [2024-11-19 21:19:11.779535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.065 [2024-11-19 21:19:11.779585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:29:38.065 [2024-11-19 21:19:11.779612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:38.065 [2024-11-19 21:19:11.779771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.065 [2024-11-19 21:19:11.779805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4d00 with addr=10.0.0.2, port=4420 00:29:38.065 [2024-11-19 21:19:11.779828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set 00:29:38.065 [2024-11-19 21:19:11.779863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:38.065 [2024-11-19 21:19:11.779896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:38.065 [2024-11-19 21:19:11.779927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:38.065 [2024-11-19 21:19:11.779955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:38.065 [2024-11-19 21:19:11.779976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:38.065 [2024-11-19 21:19:11.780001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:29:38.065 [2024-11-19 21:19:11.780026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:38.065 [2024-11-19 21:19:11.780061] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:29:38.065 [2024-11-19 21:19:11.780107] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:29:38.065 [2024-11-19 21:19:11.780139] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:29:38.065 [2024-11-19 21:19:11.780223] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:29:38.065 [2024-11-19 21:19:11.780271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:38.065 [2024-11-19 21:19:11.780312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:38.065 [2024-11-19 21:19:11.781172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.781209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.781258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.781287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.781317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.781341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.781374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.781399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.781426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.781450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.781477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.781501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.781528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 
21:19:11.781562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.781588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.781611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.781637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.781661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.781689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.781712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.781739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.781762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.781788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.781811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.781839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.781862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.781889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.781911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.781939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.065 [2024-11-19 21:19:11.781962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.065 [2024-11-19 21:19:11.782000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.782028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.782055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.782089] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.782117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.782141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.782169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.782193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.782220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.782243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.782270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.782293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.782320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.782342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.782370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.782393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.782420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.782445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.782471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.782496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.782523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.782547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.782575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.782599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.782627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.782651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.782683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.782708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.782735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.782759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.782788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.782812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.782840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.782864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.782891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.782915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.782942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.782966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.782994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.783017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.783045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.783078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.783108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.783132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.783160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.783184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.783212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.783235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.783263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.783287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.783315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.783343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.783373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.783396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.783423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.783447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.783474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.783498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.783525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.783550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.783576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.783599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.783627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.783649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.783676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.783699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.783727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.783750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.783778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.783801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.783828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.783851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.783878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.783901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.783928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.783951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.783983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.784006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.066 [2024-11-19 21:19:11.784033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.066 [2024-11-19 21:19:11.784056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.067 [2024-11-19 21:19:11.784107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.067 [2024-11-19 21:19:11.784131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:38.067 [2024-11-19 21:19:11.784159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.067 [2024-11-19 21:19:11.784183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.067 [2024-11-19 21:19:11.784209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.067 [2024-11-19 21:19:11.784232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.067 [2024-11-19 21:19:11.784259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.067 [2024-11-19 21:19:11.784282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.067 [2024-11-19 21:19:11.784308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.067 [2024-11-19 21:19:11.784331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.067 [2024-11-19 21:19:11.784357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.067 [2024-11-19 21:19:11.784381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.067 [2024-11-19 21:19:11.784408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.067 [2024-11-19 21:19:11.784432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.067 [2024-11-19 21:19:11.784458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.067 [2024-11-19 21:19:11.784483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.067 [2024-11-19 21:19:11.784509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.067 [2024-11-19 21:19:11.784533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:38.067 [2024-11-19 21:19:11.784556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb380 is same with the state(6) to be set
00:29:38.067
00:29:38.067 Latency(us)
00:29:38.067 [2024-11-19T20:19:11.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:38.067 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.067 Job: Nvme1n1 ended in about 1.02 seconds with error
00:29:38.067 Verification LBA range: start 0x0 length 0x400
00:29:38.067 Nvme1n1 : 1.02 131.89 8.24 62.53 0.00 325759.17 22039.51 302921.96
00:29:38.067 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.067 Job: Nvme2n1 ended in about 1.05 seconds with error
00:29:38.067 Verification LBA range: start 0x0 length 0x400
00:29:38.067 Nvme2n1 : 1.05 122.48 7.65 61.24 0.00 338246.42 20194.80 301368.51
00:29:38.067 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.067 Job: Nvme3n1 ended in about 0.99 seconds with error
00:29:38.067 Verification LBA range: start 0x0 length 0x400
00:29:38.067 Nvme3n1 : 0.99 194.00 12.13 64.67 0.00 234849.85 14563.56 268746.15
00:29:38.067 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.067 Job: Nvme4n1 ended in about 0.99 seconds with error
00:29:38.067 Verification LBA range: start 0x0 length 0x400
00:29:38.067 Nvme4n1 : 0.99 193.78 12.11 64.59 0.00 230255.41 8301.23 302921.96
00:29:38.067 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.067 Job: Nvme5n1 ended in about 1.05 seconds with error
00:29:38.067 Verification LBA range: start 0x0 length 0x400
00:29:38.067 Nvme5n1 : 1.05 121.76 7.61 60.88 0.00 320752.20 41360.50 282727.16
00:29:38.067 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.067 Job: Nvme6n1 ended in about 1.06 seconds with error
00:29:38.067 Verification LBA range: start 0x0 length 0x400
00:29:38.067 Nvme6n1 : 1.06 120.93 7.56 60.47 0.00 316508.54 24175.50 302921.96
00:29:38.067 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.067 Job: Nvme7n1 ended in about 1.06 seconds with error
00:29:38.067 Verification LBA range: start 0x0 length 0x400
00:29:38.067 Nvme7n1 : 1.06 125.08 7.82 60.19 0.00 303832.73 18641.35 301368.51
00:29:38.067 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.067 Job: Nvme8n1 ended in about 1.07 seconds with error
00:29:38.067 Verification LBA range: start 0x0 length 0x400
00:29:38.067 Nvme8n1 : 1.07 119.83 7.49 59.91 0.00 306800.51 38836.15 351078.78
00:29:38.067 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.067 Job: Nvme9n1 ended in about 1.08 seconds with error
00:29:38.067 Verification LBA range: start 0x0 length 0x400
00:29:38.067 Nvme9n1 : 1.08 119.02 7.44 59.51 0.00 302559.64 23690.05 310689.19
00:29:38.067 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.067 Job: Nvme10n1 ended in about 1.04 seconds with error
00:29:38.067 Verification LBA range: start 0x0 length 0x400
00:29:38.067 Nvme10n1 : 1.04 123.08 7.69 61.54 0.00 284494.51 22622.06 330883.98
00:29:38.067 [2024-11-19T20:19:11.862Z] ===================================================================================================================
00:29:38.067 [2024-11-19T20:19:11.862Z] Total : 1371.85 85.74 615.53 0.00 292556.09 8301.23 351078.78
00:29:38.327 [2024-11-19 21:19:11.877792] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:38.327 [2024-11-19 21:19:11.877900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:29:38.327 [2024-11-19 21:19:11.878315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.327 [2024-11-19 21:19:11.878368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5700 with addr=10.0.0.2, port=4420
00:29:38.327 [2024-11-19 21:19:11.878398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set
00:29:38.327 [2024-11-19 21:19:11.878566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.327 [2024-11-19 21:19:11.878601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420
[2024-11-19 21:19:11.878625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set 00:29:38.327 [2024-11-19 21:19:11.878769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.327 [2024-11-19 21:19:11.878813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6b00 with addr=10.0.0.2, port=4420 00:29:38.327 [2024-11-19 21:19:11.878838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set 00:29:38.327 [2024-11-19 21:19:11.878869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:38.327 [2024-11-19 21:19:11.878891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:38.327 [2024-11-19 21:19:11.878915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:38.327 [2024-11-19 21:19:11.878939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:38.327 [2024-11-19 21:19:11.878965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:38.327 [2024-11-19 21:19:11.878985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:38.327 [2024-11-19 21:19:11.879005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:38.327 [2024-11-19 21:19:11.879026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:38.327 [2024-11-19 21:19:11.879063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:38.327 [2024-11-19 21:19:11.879092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:38.327 [2024-11-19 21:19:11.879129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:38.327 [2024-11-19 21:19:11.879149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:38.327 [2024-11-19 21:19:11.879247] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:29:38.327 [2024-11-19 21:19:11.879283] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:29:38.327 [2024-11-19 21:19:11.879323] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
00:29:38.327 [2024-11-19 21:19:11.879362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:29:38.327 [2024-11-19 21:19:11.879421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:38.327 [2024-11-19 21:19:11.879456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:38.327 [2024-11-19 21:19:11.881381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:38.327 [2024-11-19 21:19:11.881695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.327 [2024-11-19 21:19:11.881734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:29:38.327 [2024-11-19 21:19:11.881759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:38.327 [2024-11-19 21:19:11.881789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:38.327 [2024-11-19 21:19:11.881810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:38.328 [2024-11-19 21:19:11.881830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:38.328 [2024-11-19 21:19:11.881852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:38.328 [2024-11-19 21:19:11.881881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:38.328 [2024-11-19 21:19:11.881901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:38.328 [2024-11-19 21:19:11.881920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:38.328 [2024-11-19 21:19:11.881940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:38.328 [2024-11-19 21:19:11.881998] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:29:38.328 [2024-11-19 21:19:11.882031] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:29:38.328 [2024-11-19 21:19:11.882061] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:29:38.328 [2024-11-19 21:19:11.882144] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:29:38.328 [2024-11-19 21:19:11.882177] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:29:38.328 [2024-11-19 21:19:11.882207] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:29:38.328 [2024-11-19 21:19:11.883192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:38.328 [2024-11-19 21:19:11.883246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:38.328 [2024-11-19 21:19:11.883278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:38.328 [2024-11-19 21:19:11.883566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.328 [2024-11-19 21:19:11.883606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:38.328 [2024-11-19 21:19:11.883633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:38.328 [2024-11-19 21:19:11.883663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:38.328 [2024-11-19 21:19:11.883691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:38.328 [2024-11-19 21:19:11.883713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:38.328 [2024-11-19 21:19:11.883734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:38.328 [2024-11-19 21:19:11.883755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:29:38.328 [2024-11-19 21:19:11.883777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:38.328 [2024-11-19 21:19:11.883797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:38.328 [2024-11-19 21:19:11.883817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:38.328 [2024-11-19 21:19:11.883837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:29:38.328 [2024-11-19 21:19:11.883860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:38.328 [2024-11-19 21:19:11.883879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:38.328 [2024-11-19 21:19:11.883904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:38.328 [2024-11-19 21:19:11.883926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:29:38.328 [2024-11-19 21:19:11.884166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:38.328 [2024-11-19 21:19:11.884203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:38.328 [2024-11-19 21:19:11.884360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.328 [2024-11-19 21:19:11.884397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3900 with addr=10.0.0.2, port=4420 00:29:38.328 [2024-11-19 21:19:11.884423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:38.328 [2024-11-19 21:19:11.884542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.328 [2024-11-19 21:19:11.884578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:29:38.328 [2024-11-19 21:19:11.884602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:38.328 [2024-11-19 21:19:11.884724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.328 [2024-11-19 21:19:11.884758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420 00:29:38.328 [2024-11-19 21:19:11.884782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:38.328 [2024-11-19 21:19:11.884811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:38.328 [2024-11-19 21:19:11.884838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:38.328 [2024-11-19 21:19:11.884859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:38.328 [2024-11-19 21:19:11.884879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:38.328 [2024-11-19 21:19:11.884900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:29:38.328 [2024-11-19 21:19:11.885065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.328 [2024-11-19 21:19:11.885119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4d00 with addr=10.0.0.2, port=4420 00:29:38.328 [2024-11-19 21:19:11.885143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set 00:29:38.328 [2024-11-19 21:19:11.885241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.328 [2024-11-19 21:19:11.885274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:29:38.328 [2024-11-19 21:19:11.885299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:38.328 [2024-11-19 21:19:11.885328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:38.328 [2024-11-19 21:19:11.885369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:38.328 [2024-11-19 21:19:11.885398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:38.328 [2024-11-19 21:19:11.885424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:38.328 [2024-11-19 21:19:11.885445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:38.328 [2024-11-19 21:19:11.885471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:38.328 [2024-11-19 21:19:11.885493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:38.328 [2024-11-19 21:19:11.885563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:38.328 [2024-11-19 21:19:11.885598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:38.328 [2024-11-19 21:19:11.885624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:38.328 [2024-11-19 21:19:11.885646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:38.328 [2024-11-19 21:19:11.885667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:38.328 [2024-11-19 21:19:11.885686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:38.328 [2024-11-19 21:19:11.885710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:38.328 [2024-11-19 21:19:11.885730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:38.328 [2024-11-19 21:19:11.885749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:29:38.328 [2024-11-19 21:19:11.885768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:38.328 [2024-11-19 21:19:11.885790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:38.328 [2024-11-19 21:19:11.885809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:38.328 [2024-11-19 21:19:11.885828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:38.328 [2024-11-19 21:19:11.885847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:38.328 [2024-11-19 21:19:11.885945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:38.328 [2024-11-19 21:19:11.885973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:38.328 [2024-11-19 21:19:11.885994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:38.328 [2024-11-19 21:19:11.886016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:38.328 [2024-11-19 21:19:11.886038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:38.328 [2024-11-19 21:19:11.886057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:38.328 [2024-11-19 21:19:11.886108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:38.328 [2024-11-19 21:19:11.886131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:29:40.858 21:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3086444 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3086444 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3086444 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:41.795 rmmod nvme_tcp 00:29:41.795 
rmmod nvme_fabrics 00:29:41.795 rmmod nvme_keyring 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:41.795 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:41.796 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3086141 ']' 00:29:41.796 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3086141 00:29:41.796 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3086141 ']' 00:29:41.796 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3086141 00:29:41.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3086141) - No such process 00:29:41.796 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3086141 is not found' 00:29:41.796 Process with pid 3086141 is not found 00:29:41.796 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:41.796 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:41.796 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:41.796 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:41.796 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:41.796 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:41.796 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:41.796 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:41.796 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:41.796 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.796 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.796 21:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:44.329 00:29:44.329 real 0m11.626s 00:29:44.329 user 0m34.365s 00:29:44.329 sys 0m1.967s 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:44.329 ************************************ 00:29:44.329 END TEST nvmf_shutdown_tc3 00:29:44.329 ************************************ 00:29:44.329 21:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:44.329 ************************************ 00:29:44.329 START TEST nvmf_shutdown_tc4 00:29:44.329 ************************************ 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:44.329 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:44.329 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:44.329 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.330 21:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:44.330 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:44.330 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:44.330 21:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:44.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:44.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:29:44.330 00:29:44.330 --- 10.0.0.2 ping statistics --- 00:29:44.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.330 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:44.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:44.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:29:44.330 00:29:44.330 --- 10.0.0.1 ping statistics --- 00:29:44.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.330 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3087630 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3087630 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3087630 ']' 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:44.330 21:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:44.330 [2024-11-19 21:19:17.941163] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:29:44.330 [2024-11-19 21:19:17.941306] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.589 [2024-11-19 21:19:18.129179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:44.589 [2024-11-19 21:19:18.265559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:44.589 [2024-11-19 21:19:18.265634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:44.589 [2024-11-19 21:19:18.265653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:44.589 [2024-11-19 21:19:18.265672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:44.589 [2024-11-19 21:19:18.265686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:44.589 [2024-11-19 21:19:18.268136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:44.589 [2024-11-19 21:19:18.268254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:44.589 [2024-11-19 21:19:18.268302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.589 [2024-11-19 21:19:18.268309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:45.523 21:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:45.523 21:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:45.523 21:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:45.523 21:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:45.523 21:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:45.523 21:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:45.523 21:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:45.524 21:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.524 21:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:45.524 [2024-11-19 21:19:18.978944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.524 21:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.524 21:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:45.524 21:19:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:45.524 21:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:45.524 21:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.524 21:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:45.524 Malloc1 
00:29:45.524 [2024-11-19 21:19:19.118212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:45.524 Malloc2 00:29:45.524 Malloc3 00:29:45.783 Malloc4 00:29:45.783 Malloc5 00:29:46.041 Malloc6 00:29:46.041 Malloc7 00:29:46.041 Malloc8 00:29:46.299 Malloc9 00:29:46.299 Malloc10 00:29:46.299 21:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.299 21:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:46.299 21:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:46.299 21:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:46.299 21:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3087943 00:29:46.299 21:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:46.299 21:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:46.557 [2024-11-19 21:19:20.181782] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:51.838 21:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:51.838 21:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3087630 00:29:51.838 21:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3087630 ']' 00:29:51.838 21:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3087630 00:29:51.838 21:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:51.838 21:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:51.838 21:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3087630 00:29:51.838 21:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:51.838 21:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:51.838 21:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3087630' 00:29:51.838 killing process with pid 3087630 00:29:51.838 21:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3087630 00:29:51.838 21:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3087630 00:29:51.838 [2024-11-19 21:19:25.108834] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:51.838 [2024-11-19 21:19:25.108926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:51.838 [2024-11-19 21:19:25.108961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:51.838 [2024-11-19 21:19:25.108981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:51.838 [2024-11-19 21:19:25.108999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:51.838 [2024-11-19 21:19:25.109018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:51.838 [2024-11-19 21:19:25.109531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:51.838 [2024-11-19 21:19:25.109578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 starting I/O failed: -6 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 starting I/O failed: -6 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 starting I/O failed: -6 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 starting I/O failed: -6 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 starting I/O failed: -6 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 starting I/O failed: -6 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 starting I/O failed: -6 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 starting I/O failed: -6 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 Write completed with error (sct=0, sc=8) 00:29:51.838 starting I/O failed: -6 00:29:51.838 [2024-11-19 21:19:25.110877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:51.838 starting I/O failed: -6
00:29:51.838 Write completed with error (sct=0, sc=8)
[the two messages above repeat for the remaining queued writes on this qpair]
00:29:51.838 [2024-11-19 21:19:25.112501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set
[identical recv-state message repeated for tqpair=0x618000005480 between 21:19:25.112548 and 21:19:25.112663, interleaved with further failed writes]
00:29:51.838 Write completed with error (sct=0, sc=8)
00:29:51.838 starting I/O failed: -6
[the two messages above repeat for the remaining queued writes]
00:29:51.839 [2024-11-19 21:19:25.114499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005c80 is same with the state(6) to be set
[identical recv-state message repeated for tqpair=0x618000005c80 between 21:19:25.114539 and 21:19:25.114672, interleaved with further failed writes]
00:29:51.839 [2024-11-19 21:19:25.115503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:51.839 NVMe io qpair process completion error
00:29:51.839 [2024-11-19 21:19:25.115651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(6) to be set
[identical recv-state message repeated for tqpair=0x618000005080 between 21:19:25.115698 and 21:19:25.115847]
00:29:51.839 [2024-11-19 21:19:25.121143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009c80 is same with the state(6) to be set
00:29:51.839 [2024-11-19 21:19:25.121200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009c80 is same with the state(6) to be set
00:29:51.839 Write completed with error (sct=0, sc=8)
00:29:51.839 starting I/O failed: -6
[the two messages above repeat for the queued writes on the cnode3 qpairs]
00:29:51.839 [2024-11-19 21:19:25.122407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
[further "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completions]
00:29:51.840 [2024-11-19 21:19:25.124387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
[further "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completions]
00:29:51.840 [2024-11-19 21:19:25.127197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:51.840 NVMe io qpair process completion error
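For readers decoding the repeated "(sct=0, sc=8)" completions above: sct and sc are the NVMe Status Code Type and Status Code reported in the completion queue entry. SCT 0 is the Generic Command Status group, and status code 0x08 in that group is, per the NVMe base specification, "Command Aborted due to SQ Deletion", which is consistent with the target's queues being torn down during this test. The helper below is a hypothetical illustration (not part of the test code) of how those two fields map to names; only a subset of codes is listed.

/*
 * Hypothetical decoder for the "(sct=X, sc=Y)" pair printed in the log.
 * Names follow the NVMe base specification; this is an illustration only.
 */
#include <stdio.h>

static const char *sct_name(unsigned int sct)
{
        switch (sct) {
        case 0x0: return "Generic Command Status";
        case 0x1: return "Command Specific Status";
        case 0x2: return "Media and Data Integrity Errors";
        case 0x3: return "Path Related Status";
        default:  return "Vendor Specific / Unknown";
        }
}

static const char *generic_sc_name(unsigned int sc)
{
        /* Partial table of Generic Command Status (SCT 0) values. */
        switch (sc) {
        case 0x00: return "Successful Completion";
        case 0x04: return "Data Transfer Error";
        case 0x06: return "Internal Error";
        case 0x07: return "Command Abort Requested";
        case 0x08: return "Command Aborted due to SQ Deletion";
        default:   return "Other (see the NVMe specification)";
        }
}

int main(void)
{
        unsigned int sct = 0x0, sc = 0x8;   /* the values seen in this log */

        printf("sct=0x%x (%s), sc=0x%x (%s)\n",
               sct, sct_name(sct), sc, generic_sc_name(sc));
        return 0;
}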
00:29:51.840 Write completed with error (sct=0, sc=8)
00:29:51.840 starting I/O failed: -6
[the two messages above repeat for the queued writes on the cnode1 qpairs]
00:29:51.840 [2024-11-19 21:19:25.129168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[further "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completions]
00:29:51.841 [2024-11-19 21:19:25.131354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[further "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completions]
00:29:51.841 [2024-11-19 21:19:25.133980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[further "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completions]
00:29:51.842 [2024-11-19 21:19:25.143496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:51.842 NVMe io qpair process completion error
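The interleaving of "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" points at two distinct failure paths on the initiator side: writes that were already queued complete with an abort status, while new submissions are rejected outright with -6 (-ENXIO, "No such device or address") once the qpair is marked failed. The test's own submit path is not shown in this log; the sketch below (assumed names write_cb and submit_one_write) only illustrates how an SPDK initiator would typically produce both messages.

/*
 * Illustrative sketch only; not the code that produced this log.
 * The completion callback reports (sct, sc) for writes that were already
 * queued, and the submit call itself starts returning a negative errno
 * once the qpair's TCP connection is gone.
 */
#include <stdio.h>
#include "spdk/nvme.h"

static void
write_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
        if (cpl->status.sct != 0 || cpl->status.sc != 0) {
                printf("Write completed with error (sct=%d, sc=%d)\n",
                       cpl->status.sct, cpl->status.sc);
        }
}

static int
submit_one_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                 void *buf, uint64_t lba, uint32_t lba_count)
{
        int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
                                        write_cb, NULL, 0);
        if (rc != 0) {
                /* e.g. -ENXIO (-6) after the transport reported the qpair dead */
                printf("starting I/O failed: %d\n", rc);
        }
        return rc;
}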
00:29:51.842 Write completed with error (sct=0, sc=8)
00:29:51.842 starting I/O failed: -6
[the two messages above repeat for the queued writes on the cnode6 qpairs]
00:29:51.842 [2024-11-19 21:19:25.145583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[further "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completions]
00:29:51.842 [2024-11-19 21:19:25.147891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[further "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completions]
00:29:51.843 [2024-11-19 21:19:25.150497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[further "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completions]
00:29:51.843 [2024-11-19 21:19:25.160559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:51.843 NVMe io qpair process completion error
00:29:51.843 Write completed with error (sct=0, sc=8)
00:29:51.843 starting I/O failed: -6
[the two messages above repeat for the queued writes on the cnode5 qpairs]
00:29:51.844 [2024-11-19 21:19:25.162775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[further "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completions]
00:29:51.844 [2024-11-19 21:19:25.164756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[further "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completions]
00:29:51.844 [2024-11-19 21:19:25.167496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[further "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completions]
00:29:51.845 [2024-11-19 21:19:25.180099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:51.845 NVMe io qpair process completion error
00:29:51.845 Write completed with error (sct=0, sc=8)
[further "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completions on the cnode8 qpairs]
00:29:51.845 [2024-11-19 21:19:25.181764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:51.845 Write completed with error (sct=0, sc=8)
00:29:51.845 starting I/O failed: -6
00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 [2024-11-19 21:19:25.183964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.845 Write completed with error (sct=0, sc=8) 
00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.845 starting I/O failed: -6 00:29:51.845 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 
00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 [2024-11-19 21:19:25.186655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write 
completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 [2024-11-19 21:19:25.199250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:51.846 NVMe io qpair process completion error 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 
starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 starting I/O failed: -6 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.846 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 [2024-11-19 21:19:25.201030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 
Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 [2024-11-19 21:19:25.202964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 
00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 [2024-11-19 21:19:25.205654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O 
failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.847 Write completed with error (sct=0, sc=8) 00:29:51.847 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O 
failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 [2024-11-19 21:19:25.222531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:51.848 NVMe io qpair process completion error 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 [2024-11-19 21:19:25.224575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 
00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.848 starting I/O failed: -6 00:29:51.848 [2024-11-19 21:19:25.226790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.848 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with 
error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O 
failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 [2024-11-19 21:19:25.229429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 
00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 [2024-11-19 21:19:25.238850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:51.849 NVMe io qpair process completion error 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.849 starting I/O failed: -6 00:29:51.849 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 
Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 [2024-11-19 21:19:25.240764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 
starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 [2024-11-19 21:19:25.242912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 
00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 [2024-11-19 21:19:25.245651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.850 Write completed with error (sct=0, sc=8) 00:29:51.850 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O 
failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O 
failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 [2024-11-19 21:19:25.255006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:51.851 NVMe io qpair process completion error 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 
00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 starting I/O failed: -6 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 Write completed with error (sct=0, sc=8) 00:29:51.851 [2024-11-19 21:19:25.268856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:51.851 starting I/O failed: -6 00:29:51.852 starting I/O failed: -6 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write 
completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 [2024-11-19 21:19:25.271099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 
Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 [2024-11-19 21:19:25.273781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 
00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.852 starting I/O failed: -6 00:29:51.852 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 
00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 starting I/O failed: -6 00:29:51.853 [2024-11-19 21:19:25.286250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:51.853 NVMe io qpair process completion error 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 
00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Write completed with error (sct=0, sc=8) 00:29:51.853 Initializing NVMe Controllers 00:29:51.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:29:51.853 Controller IO queue size 128, less than required. 00:29:51.853 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:51.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:29:51.853 Controller IO queue size 128, less than required. 00:29:51.853 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:51.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:29:51.853 Controller IO queue size 128, less than required. 00:29:51.853 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:51.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:29:51.853 Controller IO queue size 128, less than required. 00:29:51.853 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:51.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:29:51.853 Controller IO queue size 128, less than required. 00:29:51.853 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:51.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:51.853 Controller IO queue size 128, less than required. 00:29:51.853 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:51.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:29:51.853 Controller IO queue size 128, less than required. 
00:29:51.853 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:51.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:29:51.853 Controller IO queue size 128, less than required. 00:29:51.853 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:51.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:29:51.853 Controller IO queue size 128, less than required. 00:29:51.853 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:51.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:29:51.853 Controller IO queue size 128, less than required. 00:29:51.853 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:51.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:29:51.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:29:51.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:29:51.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:29:51.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:29:51.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:51.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:29:51.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:29:51.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:29:51.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:29:51.853 Initialization complete. Launching workers. 
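Each controller attach above is followed by the warning "Controller IO queue size 128, less than required", i.e. the benchmark asked for a deeper queue than the target's I/O queues provide, so surplus requests sit queued in the NVMe driver, as the log's own hint says. If that queueing is unwanted, the workload can be rerun with the queue depth capped at (or just under) the reported queue size. A minimal sketch, assuming the usual spdk_nvme_perf options (-q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds, -r transport ID); the flag spellings and the subsystem picked here are illustrative and should be checked against the tool's --help:

# Hypothetical rerun with the queue depth capped at the reported IO queue size (128).
# The transport fields mirror the attach lines above; cnode2 is just one of the ten subsystems.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -q 128 -o 4096 -w randwrite -t 10 \
  -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode2'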
00:29:51.853 ========================================================
00:29:51.853 Latency(us)
00:29:51.853 Device Information : IOPS MiB/s Average min max
00:29:51.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1464.75 62.94 87419.40 2041.90 194983.95
00:29:51.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1442.74 61.99 88883.87 2223.19 238540.86
00:29:51.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1451.63 62.37 87025.92 1468.86 250825.85
00:29:51.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1474.69 63.37 87145.64 1611.34 228197.72
00:29:51.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1463.90 62.90 86304.11 1585.16 242083.68
00:29:51.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1432.59 61.56 86717.01 1574.78 164486.27
00:29:51.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1400.64 60.18 88836.97 2159.51 161591.14
00:29:51.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1425.39 61.25 87442.76 1675.82 175126.66
00:29:51.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1447.82 62.21 86280.31 1249.47 159095.99
00:29:51.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1457.97 62.65 85864.81 1675.78 173122.52
00:29:51.854 ========================================================
00:29:51.854 Total : 14462.11 621.42 87184.45 1249.47 250825.85
00:29:51.854
00:29:51.854 [2024-11-19 21:19:25.330750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set
00:29:51.854 [2024-11-19 21:19:25.330892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016d80 is same with the state(6) to be set
00:29:51.854 [2024-11-19 21:19:25.330978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015e80 is same with the state(6) to be set
00:29:51.854 [2024-11-19 21:19:25.331061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017c80 is same with the state(6) to be set
00:29:51.854 [2024-11-19 21:19:25.331155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set
00:29:51.854 [2024-11-19 21:19:25.331238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015980 is same with the state(6) to be set
00:29:51.854 [2024-11-19 21:19:25.331320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017780 is same with the state(6) to be set
00:29:51.854 [2024-11-19 21:19:25.331400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017280 is same with the state(6) to be set
00:29:51.854 [2024-11-19 21:19:25.331489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018180 is same with the state(6) to be set
00:29:51.854 [2024-11-19 21:19:25.331578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018680 is same with the state(6) to be set
00:29:51.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:54.422 21:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
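spdk_nvme_perf exits with "errors occurred" after the in-flight writes complete with sct=0, sc=8 and the qpairs report CQ transport error -6 (No such device or address); the shutdown test then asserts exactly that failure. The trace that follows walks through autotest_common.sh's NOT and valid_exec_arg helpers around "wait 3087943". A condensed sketch of that assert-failure pattern, reconstructed from the visible trace (an approximation of the helper, not its full implementation):

# Rough shape of the NOT() helper seen in the trace below: run a command and
# succeed only if that command fails.
NOT() {
    local es=0
    "$@" || es=$?
    # Exit codes above 128 normally mean the command died from a signal;
    # pass those straight through instead of counting them as a clean failure.
    if (( es > 128 )); then
        return "$es"
    fi
    # Invert the result: NOT succeeds only when the wrapped command failed.
    (( es != 0 ))
}

# Usage mirroring the traced step: the backgrounded perf run (pid 3087943)
# must have exited non-zero for nvmf_shutdown_tc4 to pass.
NOT wait 3087943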
00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3087943 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3087943 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3087943 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:55.362 rmmod nvme_tcp 00:29:55.362 rmmod nvme_fabrics 00:29:55.362 rmmod nvme_keyring 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:55.362 21:19:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3087630 ']' 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3087630 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3087630 ']' 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3087630 00:29:55.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3087630) - No such process 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3087630 is not found' 00:29:55.362 Process with pid 3087630 is not found 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.362 21:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.298 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:57.298 00:29:57.298 real 0m13.374s 00:29:57.298 user 0m36.291s 00:29:57.298 sys 0m5.412s 00:29:57.298 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:57.298 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:57.298 ************************************ 00:29:57.298 END TEST nvmf_shutdown_tc4 00:29:57.298 ************************************ 00:29:57.298 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:57.298 00:29:57.298 real 0m55.419s 00:29:57.298 user 2m50.390s 00:29:57.298 sys 0m13.309s 00:29:57.298 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:57.298 21:19:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:57.298 ************************************ 00:29:57.298 END TEST nvmf_shutdown 00:29:57.298 ************************************ 00:29:57.298 21:19:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:57.298 21:19:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:57.298 21:19:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:57.298 21:19:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:57.557 ************************************ 00:29:57.557 START TEST nvmf_nsid 00:29:57.557 ************************************ 00:29:57.557 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:57.557 * Looking for test storage... 00:29:57.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:57.557 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:57.557 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:29:57.557 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:57.557 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:57.557 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:57.557 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:57.557 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:57.557 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:57.557 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:57.557 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:57.557 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:57.557 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:57.557 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:57.557 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:57.557 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:57.557 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:57.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.558 --rc genhtml_branch_coverage=1 00:29:57.558 --rc genhtml_function_coverage=1 00:29:57.558 --rc genhtml_legend=1 00:29:57.558 --rc geninfo_all_blocks=1 00:29:57.558 --rc geninfo_unexecuted_blocks=1 00:29:57.558 00:29:57.558 ' 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:57.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.558 --rc genhtml_branch_coverage=1 00:29:57.558 --rc genhtml_function_coverage=1 00:29:57.558 --rc genhtml_legend=1 00:29:57.558 --rc geninfo_all_blocks=1 00:29:57.558 --rc geninfo_unexecuted_blocks=1 00:29:57.558 00:29:57.558 ' 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:57.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.558 --rc genhtml_branch_coverage=1 00:29:57.558 --rc genhtml_function_coverage=1 00:29:57.558 --rc genhtml_legend=1 00:29:57.558 --rc geninfo_all_blocks=1 00:29:57.558 --rc geninfo_unexecuted_blocks=1 00:29:57.558 00:29:57.558 ' 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:57.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.558 --rc genhtml_branch_coverage=1 00:29:57.558 --rc genhtml_function_coverage=1 00:29:57.558 --rc genhtml_legend=1 00:29:57.558 --rc geninfo_all_blocks=1 00:29:57.558 --rc geninfo_unexecuted_blocks=1 00:29:57.558 00:29:57.558 ' 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:57.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:57.558 21:19:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:59.463 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:59.463 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:59.463 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:59.463 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.463 21:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.463 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.464 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:59.464 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:59.464 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.464 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.464 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:59.464 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:59.464 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.464 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:59.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:29:59.722 00:29:59.722 --- 10.0.0.2 ping statistics --- 00:29:59.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.722 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:59.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:29:59.722 00:29:59.722 --- 10.0.0.1 ping statistics --- 00:29:59.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.722 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:59.722 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:59.723 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:59.723 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3090834 00:29:59.723 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:59.723 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3090834 00:29:59.723 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3090834 ']' 00:29:59.723 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.723 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.723 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.723 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.723 21:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:59.723 [2024-11-19 21:19:33.466522] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
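The nvmf_tcp_init trace above builds the TCP test network by hand: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target side, the other port (cvl_0_1) stays in the root namespace as the initiator, an iptables rule opens port 4420, and a ping in each direction confirms connectivity. A minimal standalone sketch of the same steps, assuming the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses from this particular run (both differ on other hosts):

#!/usr/bin/env bash
# Recreate the target/initiator split that nvmftestinit sets up on a phy machine.
set -e
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Let NVMe/TCP traffic in; the comment tag is what the teardown greps for later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                        # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                    # target namespace -> initiator

Splitting the two ports across namespaces is what lets a single physical host act as both NVMe/TCP target and initiator without the kernel short-circuiting the traffic over loopback.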
00:29:59.723 [2024-11-19 21:19:33.466655] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.981 [2024-11-19 21:19:33.616688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.981 [2024-11-19 21:19:33.753012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.981 [2024-11-19 21:19:33.753113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.981 [2024-11-19 21:19:33.753140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.981 [2024-11-19 21:19:33.753165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.981 [2024-11-19 21:19:33.753185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:59.981 [2024-11-19 21:19:33.754846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3090984 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=2e19c31e-243d-45ba-933c-a4033327b5fe 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=d9ea48b4-460c-4ef5-ab1e-b3962a60a5f8 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=dd9b97c2-01a2-4879-8eca-bc20b52d21ac 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:00.917 null0 00:30:00.917 null1 00:30:00.917 null2 00:30:00.917 [2024-11-19 21:19:34.552036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.917 [2024-11-19 21:19:34.576373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3090984 /var/tmp/tgt2.sock 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3090984 ']' 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:30:00.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:00.917 21:19:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:00.917 [2024-11-19 21:19:34.615128] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
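nsid.sh drives two SPDK targets at once: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace on the default /var/tmp/spdk.sock RPC socket (pid 3090834 above), while a second spdk_tgt on core mask 0x2 answers on /var/tmp/tgt2.sock (pid 3090984), and every RPC is routed to one target or the other purely by the -s socket argument of rpc.py. A rough sketch of that pattern follows; the binaries and rpc.py path are the ones from this workspace, but the exact RPC payloads (the null bdevs and the cnode0..cnode2 subsystems listening on 10.0.0.1:4421) are not spelled out in the trace, so the calls below are an assumed reconstruction from the objects that show up later in the log:

# Target 1: NVMe-oF target inside the namespace, default RPC socket, core 0.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &

# Target 2: generic spdk_tgt on core 1 with its own RPC socket.
./build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &
# (the suite waits with waitforlisten 3090984 /var/tmp/tgt2.sock before issuing RPCs)

rpc2() { ./scripts/rpc.py -s /var/tmp/tgt2.sock "$@"; }       # all tgt2 RPCs go through -s

rpc2 nvmf_create_transport -t tcp
rpc2 bdev_null_create null0 64 512                            # name, size in MB, block size (sizes assumed)
rpc2 nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a      # -a: allow any host
rpc2 nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0   # the suite also pins each namespace's UUID to the uuidgen values above
rpc2 nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421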
00:30:00.917 [2024-11-19 21:19:34.615267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090984 ] 00:30:01.175 [2024-11-19 21:19:34.755617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.175 [2024-11-19 21:19:34.877900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.109 21:19:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:02.109 21:19:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:02.109 21:19:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:30:02.366 [2024-11-19 21:19:36.155671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:02.625 [2024-11-19 21:19:36.172005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:30:02.625 nvme0n1 nvme0n2 00:30:02.625 nvme1n1 00:30:02.625 21:19:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:30:02.625 21:19:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:30:02.625 21:19:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:03.191 21:19:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:30:03.191 21:19:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:30:03.191 21:19:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:30:03.191 21:19:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:30:03.191 21:19:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:30:03.191 21:19:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:30:03.191 21:19:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:30:03.191 21:19:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:03.191 21:19:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:03.191 21:19:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:03.191 21:19:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:30:03.191 21:19:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:30:03.191 21:19:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 2e19c31e-243d-45ba-933c-a4033327b5fe 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2e19c31e243d45ba933ca4033327b5fe 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2E19C31E243D45BA933CA4033327B5FE 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 2E19C31E243D45BA933CA4033327B5FE == \2\E\1\9\C\3\1\E\2\4\3\D\4\5\B\A\9\3\3\C\A\4\0\3\3\3\2\7\B\5\F\E ]] 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid d9ea48b4-460c-4ef5-ab1e-b3962a60a5f8 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:30:04.125 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:04.383 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d9ea48b4460c4ef5ab1eb3962a60a5f8 00:30:04.383 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D9EA48B4460C4EF5AB1EB3962A60A5F8 00:30:04.383 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ D9EA48B4460C4EF5AB1EB3962A60A5F8 == \D\9\E\A\4\8\B\4\4\6\0\C\4\E\F\5\A\B\1\E\B\3\9\6\2\A\6\0\A\5\F\8 ]] 00:30:04.383 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:30:04.383 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:04.383 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:04.383 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:30:04.383 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:04.383 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:30:04.383 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:04.383 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid dd9b97c2-01a2-4879-8eca-bc20b52d21ac 00:30:04.383 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:04.383 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:30:04.384 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:30:04.384 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:30:04.384 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:04.384 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=dd9b97c201a248798ecabc20b52d21ac 00:30:04.384 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DD9B97C201A248798ECABC20B52D21AC 00:30:04.384 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ DD9B97C201A248798ECABC20B52D21AC == \D\D\9\B\9\7\C\2\0\1\A\2\4\8\7\9\8\E\C\A\B\C\2\0\B\5\2\D\2\1\A\C ]] 00:30:04.384 21:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:30:04.641 21:19:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:04.641 21:19:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:30:04.641 21:19:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3090984 00:30:04.641 21:19:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3090984 ']' 00:30:04.641 21:19:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3090984 00:30:04.642 21:19:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:04.642 21:19:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:04.642 21:19:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3090984 00:30:04.642 21:19:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:04.642 21:19:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:04.642 21:19:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3090984' 00:30:04.642 killing process with pid 3090984 00:30:04.642 21:19:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3090984 00:30:04.642 21:19:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3090984 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
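The block above is the core assertion of the nsid test: connect to the second target over TCP, wait for the namespaces to appear as block devices, and check that each namespace reports an NGUID equal to its intended UUID with the dashes stripped (nvme0n1/n2/n3 against ns1uuid/ns2uuid/ns3uuid). Condensed into standalone form for one namespace, reusing the hostnqn, hostid and UUID from this run; the controller name nvme0 is simply how it happened to enumerate here:

uuid=2e19c31e-243d-45ba-933c-a4033327b5fe                      # ns1uuid generated above
expected=$(tr -d '-' <<<"$uuid" | tr '[:lower:]' '[:upper:]')  # uuid2nguid: strip dashes, upper-case

nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55

# waitforblk: poll until the namespace is visible as a block device (the suite caps this at ~15 tries)
until lsblk -l -o NAME | grep -q -w nvme0n1; do sleep 1; done

actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
[[ $actual == "$expected" ]] || { echo "NGUID mismatch: $actual != $expected"; exit 1; }

nvme disconnect -d /dev/nvme0                                  # drop the whole controller when done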
00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:07.171 rmmod nvme_tcp 00:30:07.171 rmmod nvme_fabrics 00:30:07.171 rmmod nvme_keyring 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3090834 ']' 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3090834 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3090834 ']' 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3090834 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3090834 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3090834' 00:30:07.171 killing process with pid 3090834 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3090834 00:30:07.171 21:19:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3090834 00:30:08.104 21:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:08.104 21:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:08.104 21:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:08.104 21:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:30:08.104 21:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:30:08.104 21:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:08.104 21:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:30:08.104 21:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:08.104 21:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:08.104 21:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.104 21:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.104 21:19:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.636 21:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:30:10.636 00:30:10.636 real 0m12.718s 00:30:10.636 user 0m15.474s 00:30:10.636 sys 0m3.077s 00:30:10.636 21:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:10.636 21:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:10.636 ************************************ 00:30:10.636 END TEST nvmf_nsid 00:30:10.636 ************************************ 00:30:10.636 21:19:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:10.636 00:30:10.636 real 18m37.138s 00:30:10.636 user 51m17.368s 00:30:10.636 sys 3m32.677s 00:30:10.636 21:19:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:10.636 21:19:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:10.636 ************************************ 00:30:10.636 END TEST nvmf_target_extra 00:30:10.636 ************************************ 00:30:10.636 21:19:43 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:10.636 21:19:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:10.636 21:19:43 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:10.636 21:19:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:10.636 ************************************ 00:30:10.636 START TEST nvmf_host 00:30:10.636 ************************************ 00:30:10.636 21:19:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:10.636 * Looking for test storage... 00:30:10.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:10.636 21:19:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:10.636 21:19:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:30:10.636 21:19:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:10.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.636 --rc genhtml_branch_coverage=1 00:30:10.636 --rc genhtml_function_coverage=1 00:30:10.636 --rc genhtml_legend=1 00:30:10.636 --rc geninfo_all_blocks=1 00:30:10.636 --rc geninfo_unexecuted_blocks=1 00:30:10.636 00:30:10.636 ' 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:10.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.636 --rc genhtml_branch_coverage=1 00:30:10.636 --rc genhtml_function_coverage=1 00:30:10.636 --rc genhtml_legend=1 00:30:10.636 --rc geninfo_all_blocks=1 00:30:10.636 --rc geninfo_unexecuted_blocks=1 00:30:10.636 00:30:10.636 ' 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:10.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.636 --rc genhtml_branch_coverage=1 00:30:10.636 --rc genhtml_function_coverage=1 00:30:10.636 --rc genhtml_legend=1 00:30:10.636 --rc geninfo_all_blocks=1 00:30:10.636 --rc geninfo_unexecuted_blocks=1 00:30:10.636 00:30:10.636 ' 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:10.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.636 --rc genhtml_branch_coverage=1 00:30:10.636 --rc genhtml_function_coverage=1 00:30:10.636 --rc genhtml_legend=1 00:30:10.636 --rc geninfo_all_blocks=1 00:30:10.636 --rc geninfo_unexecuted_blocks=1 00:30:10.636 00:30:10.636 ' 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
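The lt/cmp_versions xtrace a few lines above (lt 1.15 2, used to choose the lcov option set) splits both version strings on '.', '-' and ':' and compares them field by field, padding the shorter one with zeros. A simplified reimplementation of that helper, covering only the '<' and '>' operators exercised in this trace:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:                              # split fields on '.', '-' or ':'
    local op=$2 i
    local -a ver1 ver2
    read -ra ver1 <<<"$1"
    read -ra ver2 <<<"$3"
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}  # missing fields compare as 0
        ((a == b)) && continue
        case $op in
            '<') ((a < b)); return ;;
            '>') ((a > b)); return ;;
        esac
    done
    return 1                                   # all fields equal: neither strictly less nor greater
}

lt 1.15 2 && echo "1.15 < 2"                   # true: first field 1 < 2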
00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.636 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:10.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.637 ************************************ 00:30:10.637 START TEST nvmf_multicontroller 00:30:10.637 ************************************ 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:10.637 * Looking for test storage... 
00:30:10.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:10.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.637 --rc genhtml_branch_coverage=1 00:30:10.637 --rc genhtml_function_coverage=1 00:30:10.637 --rc genhtml_legend=1 00:30:10.637 --rc geninfo_all_blocks=1 00:30:10.637 --rc geninfo_unexecuted_blocks=1 00:30:10.637 00:30:10.637 ' 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:10.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.637 --rc genhtml_branch_coverage=1 00:30:10.637 --rc genhtml_function_coverage=1 00:30:10.637 --rc genhtml_legend=1 00:30:10.637 --rc geninfo_all_blocks=1 00:30:10.637 --rc geninfo_unexecuted_blocks=1 00:30:10.637 00:30:10.637 ' 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:10.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.637 --rc genhtml_branch_coverage=1 00:30:10.637 --rc genhtml_function_coverage=1 00:30:10.637 --rc genhtml_legend=1 00:30:10.637 --rc geninfo_all_blocks=1 00:30:10.637 --rc geninfo_unexecuted_blocks=1 00:30:10.637 00:30:10.637 ' 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:10.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.637 --rc genhtml_branch_coverage=1 00:30:10.637 --rc genhtml_function_coverage=1 00:30:10.637 --rc genhtml_legend=1 00:30:10.637 --rc geninfo_all_blocks=1 00:30:10.637 --rc geninfo_unexecuted_blocks=1 00:30:10.637 00:30:10.637 ' 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:10.637 21:19:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:10.637 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:10.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:10.638 21:19:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:30:10.638 21:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:12.561 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:12.561 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:12.561 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:12.561 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:12.561 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:12.561 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:12.561 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:12.561 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:12.561 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:12.561 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:12.561 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:12.562 
21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:12.562 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:12.562 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.562 21:19:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:12.562 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:12.562 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
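The device scan above found both e810 ports (cvl_0_0 and cvl_0_1), and nvmf_tcp_init, which runs next, splits them into a target-side network namespace and an initiator-side interface. A condensed sketch of that setup, using the interface names and addresses from this run (the full command-by-command trace follows; the iptables comment matching and address flushes are omitted here):

# Target NIC goes into its own namespace; initiator NIC stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow the NVMe/TCP port in and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1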
00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:12.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:12.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:30:12.562 00:30:12.562 --- 10.0.0.2 ping statistics --- 00:30:12.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.562 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:12.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:12.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:30:12.562 00:30:12.562 --- 10.0.0.1 ping statistics --- 00:30:12.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.562 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:12.562 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:12.821 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:12.821 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:12.821 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:12.821 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:12.821 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3093938 00:30:12.821 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:12.821 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3093938 00:30:12.821 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3093938 ']' 00:30:12.821 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.821 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:12.821 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.821 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:12.821 21:19:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:12.821 [2024-11-19 21:19:46.461826] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:30:12.821 [2024-11-19 21:19:46.461965] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.080 [2024-11-19 21:19:46.627242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:13.080 [2024-11-19 21:19:46.769860] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.080 [2024-11-19 21:19:46.769944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.080 [2024-11-19 21:19:46.769971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.080 [2024-11-19 21:19:46.769994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.080 [2024-11-19 21:19:46.770014] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:13.080 [2024-11-19 21:19:46.772736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:13.080 [2024-11-19 21:19:46.772828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.080 [2024-11-19 21:19:46.772833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:14.015 [2024-11-19 21:19:47.495605] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:14.015 Malloc0 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.015 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:14.016 [2024-11-19 21:19:47.619929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:14.016 [2024-11-19 21:19:47.627768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:14.016 Malloc1 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3094093 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3094093 /var/tmp/bdevperf.sock 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3094093 ']' 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:14.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
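With both subsystems configured and bdevperf waiting on its own RPC socket, the target-side setup performed through rpc_cmd above reduces to the following sequence; rpc.py here stands in for SPDK's scripts/rpc.py, which the rpc_cmd wrapper in this trace effectively invokes (paths abbreviated):

# TCP transport plus two subsystems, each backed by a 64 MB malloc bdev and listening on ports 4420 and 4421
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
# Initiator side: bdevperf started with -z (wait for RPC) on /var/tmp/bdevperf.sock, 128 QD, 4 KiB writes, 1 s run
bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f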
00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:14.016 21:19:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.389 21:19:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:15.389 21:19:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:15.389 21:19:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:15.389 21:19:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.389 21:19:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.389 NVMe0n1 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.389 1 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.389 request: 00:30:15.389 { 00:30:15.389 "name": "NVMe0", 00:30:15.389 "trtype": "tcp", 00:30:15.389 "traddr": "10.0.0.2", 00:30:15.389 "adrfam": "ipv4", 00:30:15.389 "trsvcid": "4420", 00:30:15.389 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:30:15.389 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:15.389 "hostaddr": "10.0.0.1", 00:30:15.389 "prchk_reftag": false, 00:30:15.389 "prchk_guard": false, 00:30:15.389 "hdgst": false, 00:30:15.389 "ddgst": false, 00:30:15.389 "allow_unrecognized_csi": false, 00:30:15.389 "method": "bdev_nvme_attach_controller", 00:30:15.389 "req_id": 1 00:30:15.389 } 00:30:15.389 Got JSON-RPC error response 00:30:15.389 response: 00:30:15.389 { 00:30:15.389 "code": -114, 00:30:15.389 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:15.389 } 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.389 request: 00:30:15.389 { 00:30:15.389 "name": "NVMe0", 00:30:15.389 "trtype": "tcp", 00:30:15.389 "traddr": "10.0.0.2", 00:30:15.389 "adrfam": "ipv4", 00:30:15.389 "trsvcid": "4420", 00:30:15.389 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:15.389 "hostaddr": "10.0.0.1", 00:30:15.389 "prchk_reftag": false, 00:30:15.389 "prchk_guard": false, 00:30:15.389 "hdgst": false, 00:30:15.389 "ddgst": false, 00:30:15.389 "allow_unrecognized_csi": false, 00:30:15.389 "method": "bdev_nvme_attach_controller", 00:30:15.389 "req_id": 1 00:30:15.389 } 00:30:15.389 Got JSON-RPC error response 00:30:15.389 response: 00:30:15.389 { 00:30:15.389 "code": -114, 00:30:15.389 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:15.389 } 00:30:15.389 21:19:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.389 request: 00:30:15.389 { 00:30:15.389 "name": "NVMe0", 00:30:15.389 "trtype": "tcp", 00:30:15.389 "traddr": "10.0.0.2", 00:30:15.389 "adrfam": "ipv4", 00:30:15.389 "trsvcid": "4420", 00:30:15.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:15.389 "hostaddr": "10.0.0.1", 00:30:15.389 "prchk_reftag": false, 00:30:15.389 "prchk_guard": false, 00:30:15.389 "hdgst": false, 00:30:15.389 "ddgst": false, 00:30:15.389 "multipath": "disable", 00:30:15.389 "allow_unrecognized_csi": false, 00:30:15.389 "method": "bdev_nvme_attach_controller", 00:30:15.389 "req_id": 1 00:30:15.389 } 00:30:15.389 Got JSON-RPC error response 00:30:15.389 response: 00:30:15.389 { 00:30:15.389 "code": -114, 00:30:15.389 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:15.389 } 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:15.389 21:19:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:15.389 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:15.390 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:15.390 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:15.390 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.390 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.390 request: 00:30:15.390 { 00:30:15.390 "name": "NVMe0", 00:30:15.390 "trtype": "tcp", 00:30:15.390 "traddr": "10.0.0.2", 00:30:15.390 "adrfam": "ipv4", 00:30:15.390 "trsvcid": "4420", 00:30:15.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:15.390 "hostaddr": "10.0.0.1", 00:30:15.390 "prchk_reftag": false, 00:30:15.390 "prchk_guard": false, 00:30:15.390 "hdgst": false, 00:30:15.390 "ddgst": false, 00:30:15.390 "multipath": "failover", 00:30:15.390 "allow_unrecognized_csi": false, 00:30:15.390 "method": "bdev_nvme_attach_controller", 00:30:15.390 "req_id": 1 00:30:15.390 } 00:30:15.390 Got JSON-RPC error response 00:30:15.390 response: 00:30:15.390 { 00:30:15.390 "code": -114, 00:30:15.390 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:15.390 } 00:30:15.390 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:15.390 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:15.390 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:15.390 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:15.390 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:15.390 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:15.390 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.390 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.646 NVMe0n1 00:30:15.646 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
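Every rejected call above returned JSON-RPC error -114: once a controller named NVMe0 exists, a second bdev_nvme_attach_controller for that name is only accepted when it describes a consistent additional path to the same subsystem (the attempts with a different hostnqn or a different subnqn are refused), '-x disable' rejects any second path outright, and '-x failover' to the path already in use is likewise an error. The accepted calls, including those that follow in the trace, reduce to roughly this sequence against the bdevperf RPC socket; rpc.py and bdevperf.py again stand in for the wrappers the test uses:

# First path to cnode1 via port 4420 creates bdev NVMe0n1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
# The same subsystem reached through the second listener is accepted as another path
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# A second controller name against the same listener is also fine; then run the configured write workload
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests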
00:30:15.646 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:15.646 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.646 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.646 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.646 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:15.646 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.646 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.903 00:30:15.903 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.903 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:15.903 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.903 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:15.903 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.903 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.903 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:15.903 21:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:17.276 { 00:30:17.276 "results": [ 00:30:17.276 { 00:30:17.276 "job": "NVMe0n1", 00:30:17.276 "core_mask": "0x1", 00:30:17.276 "workload": "write", 00:30:17.276 "status": "finished", 00:30:17.276 "queue_depth": 128, 00:30:17.276 "io_size": 4096, 00:30:17.276 "runtime": 1.01489, 00:30:17.276 "iops": 12891.052232261625, 00:30:17.276 "mibps": 50.355672782271974, 00:30:17.276 "io_failed": 0, 00:30:17.276 "io_timeout": 0, 00:30:17.276 "avg_latency_us": 9911.058179543144, 00:30:17.276 "min_latency_us": 9175.04, 00:30:17.276 "max_latency_us": 20971.52 00:30:17.276 } 00:30:17.276 ], 00:30:17.276 "core_count": 1 00:30:17.276 } 00:30:17.276 21:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:17.276 21:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.276 21:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.276 21:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.276 21:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:17.276 21:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3094093 00:30:17.276 21:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # 
'[' -z 3094093 ']' 00:30:17.276 21:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3094093 00:30:17.276 21:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:17.276 21:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:17.276 21:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3094093 00:30:17.276 21:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:17.276 21:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:17.276 21:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3094093' 00:30:17.276 killing process with pid 3094093 00:30:17.276 21:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3094093 00:30:17.276 21:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3094093 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:30:17.842 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:17.842 [2024-11-19 21:19:47.828388] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:30:17.842 [2024-11-19 21:19:47.828561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094093 ] 00:30:17.842 [2024-11-19 21:19:47.971342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.842 [2024-11-19 21:19:48.099880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.842 [2024-11-19 21:19:49.487628] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name 44d3d5a2-754c-4ba7-a47d-c0d19130eb89 already exists 00:30:17.842 [2024-11-19 21:19:49.487698] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:44d3d5a2-754c-4ba7-a47d-c0d19130eb89 alias for bdev NVMe1n1 00:30:17.842 [2024-11-19 21:19:49.487748] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:17.842 Running I/O for 1 seconds... 00:30:17.842 12828.00 IOPS, 50.11 MiB/s 00:30:17.842 Latency(us) 00:30:17.842 [2024-11-19T20:19:51.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.842 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:17.842 NVMe0n1 : 1.01 12891.05 50.36 0.00 0.00 9911.06 9175.04 20971.52 00:30:17.842 [2024-11-19T20:19:51.637Z] =================================================================================================================== 00:30:17.842 [2024-11-19T20:19:51.637Z] Total : 12891.05 50.36 0.00 0.00 9911.06 9175.04 20971.52 00:30:17.842 Received shutdown signal, test time was about 1.000000 seconds 00:30:17.842 00:30:17.842 Latency(us) 00:30:17.842 [2024-11-19T20:19:51.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.842 [2024-11-19T20:19:51.637Z] =================================================================================================================== 00:30:17.842 [2024-11-19T20:19:51.637Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:17.842 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:17.842 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:17.843 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:17.843 rmmod nvme_tcp 00:30:17.843 rmmod nvme_fabrics 00:30:17.843 rmmod nvme_keyring 00:30:18.101 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:18.101 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:18.101 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:18.101 
21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3093938 ']' 00:30:18.101 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3093938 00:30:18.101 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3093938 ']' 00:30:18.101 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3093938 00:30:18.101 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:18.101 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:18.101 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3093938 00:30:18.101 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:18.101 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:18.101 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3093938' 00:30:18.101 killing process with pid 3093938 00:30:18.101 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3093938 00:30:18.101 21:19:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3093938 00:30:19.474 21:19:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:19.474 21:19:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:19.474 21:19:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:19.474 21:19:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:19.474 21:19:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:30:19.474 21:19:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:19.474 21:19:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:30:19.474 21:19:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:19.474 21:19:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:19.474 21:19:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.474 21:19:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:19.474 21:19:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.374 21:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:21.374 00:30:21.374 real 0m11.059s 00:30:21.374 user 0m23.160s 00:30:21.374 sys 0m2.717s 00:30:21.374 21:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:21.374 21:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.374 ************************************ 00:30:21.374 END TEST nvmf_multicontroller 00:30:21.374 ************************************ 00:30:21.374 21:19:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:30:21.374 21:19:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:21.374 21:19:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:21.374 21:19:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.374 ************************************ 00:30:21.374 START TEST nvmf_aer 00:30:21.374 ************************************ 00:30:21.374 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:21.633 * Looking for test storage... 00:30:21.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:21.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.633 --rc genhtml_branch_coverage=1 00:30:21.633 --rc genhtml_function_coverage=1 00:30:21.633 --rc genhtml_legend=1 00:30:21.633 --rc geninfo_all_blocks=1 00:30:21.633 --rc geninfo_unexecuted_blocks=1 00:30:21.633 00:30:21.633 ' 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:21.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.633 --rc genhtml_branch_coverage=1 00:30:21.633 --rc genhtml_function_coverage=1 00:30:21.633 --rc genhtml_legend=1 00:30:21.633 --rc geninfo_all_blocks=1 00:30:21.633 --rc geninfo_unexecuted_blocks=1 00:30:21.633 00:30:21.633 ' 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:21.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.633 --rc genhtml_branch_coverage=1 00:30:21.633 --rc genhtml_function_coverage=1 00:30:21.633 --rc genhtml_legend=1 00:30:21.633 --rc geninfo_all_blocks=1 00:30:21.633 --rc geninfo_unexecuted_blocks=1 00:30:21.633 00:30:21.633 ' 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:21.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.633 --rc genhtml_branch_coverage=1 00:30:21.633 --rc genhtml_function_coverage=1 00:30:21.633 --rc genhtml_legend=1 00:30:21.633 --rc geninfo_all_blocks=1 00:30:21.633 --rc geninfo_unexecuted_blocks=1 00:30:21.633 00:30:21.633 ' 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:21.633 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:21.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:21.634 21:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:23.535 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:23.535 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:23.535 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.535 21:19:57 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:23.535 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:23.535 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:23.794 
21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:23.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:23.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:30:23.794 00:30:23.794 --- 10.0.0.2 ping statistics --- 00:30:23.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.794 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:23.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:23.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:30:23.794 00:30:23.794 --- 10.0.0.1 ping statistics --- 00:30:23.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.794 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3096592 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3096592 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3096592 ']' 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:23.794 21:19:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:23.794 [2024-11-19 21:19:57.562742] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
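For readability, the nvmftestinit interface setup recorded in the trace above can be condensed into the sketch below. Every command already appears verbatim in the log; this only groups the ip/iptables calls so the target/initiator split is easier to follow (cvl_0_0 and cvl_0_1 are the two ice ports detected earlier):

    # move one port into a namespace to act as the target; its sibling stays in the
    # default namespace as the initiator
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # accept NVMe/TCP (port 4420) from the initiator interface; the SPDK_NVMF comment
    # tag is what lets nvmftestfini strip the rule again via iptables-save | grep -v SPDK_NVMF
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # default ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator

Both pings answer in the trace above, after which nvmf_tgt is launched inside the namespace.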
00:30:23.794 [2024-11-19 21:19:57.562888] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:24.052 [2024-11-19 21:19:57.718221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:24.311 [2024-11-19 21:19:57.861830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:24.311 [2024-11-19 21:19:57.861892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:24.311 [2024-11-19 21:19:57.861913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:24.311 [2024-11-19 21:19:57.861933] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:24.311 [2024-11-19 21:19:57.861950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:24.311 [2024-11-19 21:19:57.864700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.311 [2024-11-19 21:19:57.864772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:24.311 [2024-11-19 21:19:57.864870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.311 [2024-11-19 21:19:57.864876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:24.876 [2024-11-19 21:19:58.528410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:24.876 Malloc0 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:24.876 [2024-11-19 21:19:58.649894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:24.876 [ 00:30:24.876 { 00:30:24.876 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:24.876 "subtype": "Discovery", 00:30:24.876 "listen_addresses": [], 00:30:24.876 "allow_any_host": true, 00:30:24.876 "hosts": [] 00:30:24.876 }, 00:30:24.876 { 00:30:24.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:24.876 "subtype": "NVMe", 00:30:24.876 "listen_addresses": [ 00:30:24.876 { 00:30:24.876 "trtype": "TCP", 00:30:24.876 "adrfam": "IPv4", 00:30:24.876 "traddr": "10.0.0.2", 00:30:24.876 "trsvcid": "4420" 00:30:24.876 } 00:30:24.876 ], 00:30:24.876 "allow_any_host": true, 00:30:24.876 "hosts": [], 00:30:24.876 "serial_number": "SPDK00000000000001", 00:30:24.876 "model_number": "SPDK bdev Controller", 00:30:24.876 "max_namespaces": 2, 00:30:24.876 "min_cntlid": 1, 00:30:24.876 "max_cntlid": 65519, 00:30:24.876 "namespaces": [ 00:30:24.876 { 00:30:24.876 "nsid": 1, 00:30:24.876 "bdev_name": "Malloc0", 00:30:24.876 "name": "Malloc0", 00:30:24.876 "nguid": "BC6E50AD719F4A90A104D8995AE97396", 00:30:24.876 "uuid": "bc6e50ad-719f-4a90-a104-d8995ae97396" 00:30:24.876 } 00:30:24.876 ] 00:30:24.876 } 00:30:24.876 ] 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3096807 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:30:24.876 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:25.134 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:25.134 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:30:25.134 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:30:25.134 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:25.134 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:25.134 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:30:25.134 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:30:25.134 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:25.392 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:25.392 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 3 -lt 200 ']' 00:30:25.392 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=4 00:30:25.392 21:19:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:25.392 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:25.392 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:25.392 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:30:25.392 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:25.392 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.392 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:25.649 Malloc1 00:30:25.649 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.649 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:25.649 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.649 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:25.649 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.649 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:25.649 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.649 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:25.649 [ 00:30:25.649 { 00:30:25.649 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:25.649 "subtype": "Discovery", 00:30:25.649 "listen_addresses": [], 00:30:25.649 "allow_any_host": true, 00:30:25.649 "hosts": [] 00:30:25.649 }, 00:30:25.649 { 00:30:25.649 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:25.649 "subtype": "NVMe", 00:30:25.649 "listen_addresses": [ 00:30:25.649 { 00:30:25.649 "trtype": "TCP", 00:30:25.649 "adrfam": "IPv4", 00:30:25.649 "traddr": "10.0.0.2", 00:30:25.649 "trsvcid": "4420" 00:30:25.649 } 00:30:25.649 ], 00:30:25.649 "allow_any_host": true, 00:30:25.649 "hosts": [], 00:30:25.649 "serial_number": "SPDK00000000000001", 00:30:25.649 "model_number": "SPDK bdev Controller", 00:30:25.649 "max_namespaces": 2, 00:30:25.649 "min_cntlid": 1, 00:30:25.649 "max_cntlid": 65519, 00:30:25.649 "namespaces": [ 00:30:25.649 { 00:30:25.649 "nsid": 1, 00:30:25.649 "bdev_name": "Malloc0", 00:30:25.649 "name": "Malloc0", 00:30:25.649 "nguid": "BC6E50AD719F4A90A104D8995AE97396", 00:30:25.649 "uuid": "bc6e50ad-719f-4a90-a104-d8995ae97396" 00:30:25.649 }, 00:30:25.649 { 00:30:25.649 "nsid": 2, 00:30:25.649 "bdev_name": "Malloc1", 00:30:25.649 "name": "Malloc1", 00:30:25.649 "nguid": "ECDCEFAE2EAC489CA1F8EED0879B05D2", 00:30:25.649 "uuid": "ecdcefae-2eac-489c-a1f8-eed0879b05d2" 00:30:25.649 } 00:30:25.649 ] 00:30:25.649 } 00:30:25.649 ] 00:30:25.649 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.649 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3096807 00:30:25.649 Asynchronous Event Request test 00:30:25.649 Attaching to 10.0.0.2 00:30:25.649 Attached to 10.0.0.2 00:30:25.649 Registering asynchronous event callbacks... 00:30:25.649 Starting namespace attribute notice tests for all controllers... 00:30:25.649 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:25.649 aer_cb - Changed Namespace 00:30:25.649 Cleaning up... 
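The aer.sh flow traced above reduces to the following sketch. Every command is taken from the trace (paths abbreviated); rpc_cmd is the harness's JSON-RPC helper talking to the target over /var/tmp/spdk.sock, and the ordering is shown only to make the AER sequence easier to follow:

    # target application inside the namespace (backgrounded by nvmfappstart;
    # waitforlisten gates the RPC calls), then the NVMe-oF/TCP plumbing
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 --name Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # the aer helper connects, registers AER callbacks, and touches the file once armed;
    # the waitforfile loop in the trace polls for that file before continuing
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    # hot-adding a second namespace is what fires the namespace-attribute-changed AEN
    # reported above as 'aer_cb - Changed Namespace'
    rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    rpc_cmd nvmf_get_subsystems    # confirms nsid 1 (Malloc0) and nsid 2 (Malloc1)

After the AEN is observed the test tears everything down (bdev deletes, subsystem delete, nvmftestfini), which is what the remainder of this trace shows.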
00:30:25.649 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:25.649 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.649 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:25.907 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.907 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:25.907 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.907 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:25.907 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.907 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:25.907 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.907 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:25.907 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.907 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:25.907 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:25.907 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:25.907 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:25.907 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:25.907 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:25.907 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:25.907 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:25.907 rmmod nvme_tcp 00:30:25.907 rmmod nvme_fabrics 00:30:25.907 rmmod nvme_keyring 00:30:26.165 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:26.165 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:26.165 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:26.165 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3096592 ']' 00:30:26.165 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3096592 00:30:26.165 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3096592 ']' 00:30:26.165 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3096592 00:30:26.165 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:30:26.165 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:26.165 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3096592 00:30:26.165 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:26.165 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:26.165 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3096592' 00:30:26.165 killing process with pid 3096592 00:30:26.165 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # 
kill 3096592 00:30:26.165 21:19:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3096592 00:30:27.097 21:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:27.097 21:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:27.097 21:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:27.097 21:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:30:27.097 21:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:30:27.097 21:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:27.097 21:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:30:27.097 21:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:27.097 21:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:27.097 21:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.097 21:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.097 21:20:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.631 21:20:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:29.631 00:30:29.631 real 0m7.752s 00:30:29.631 user 0m11.722s 00:30:29.631 sys 0m2.279s 00:30:29.631 21:20:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:29.631 21:20:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.631 ************************************ 00:30:29.631 END TEST nvmf_aer 00:30:29.631 ************************************ 00:30:29.631 21:20:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:29.631 21:20:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:29.631 21:20:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:29.631 21:20:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.631 ************************************ 00:30:29.631 START TEST nvmf_async_init 00:30:29.631 ************************************ 00:30:29.631 21:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:29.631 * Looking for test storage... 
00:30:29.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:29.631 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:29.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.632 --rc genhtml_branch_coverage=1 00:30:29.632 --rc genhtml_function_coverage=1 00:30:29.632 --rc genhtml_legend=1 00:30:29.632 --rc geninfo_all_blocks=1 00:30:29.632 --rc geninfo_unexecuted_blocks=1 00:30:29.632 00:30:29.632 ' 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:29.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.632 --rc genhtml_branch_coverage=1 00:30:29.632 --rc genhtml_function_coverage=1 00:30:29.632 --rc genhtml_legend=1 00:30:29.632 --rc geninfo_all_blocks=1 00:30:29.632 --rc geninfo_unexecuted_blocks=1 00:30:29.632 00:30:29.632 ' 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:29.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.632 --rc genhtml_branch_coverage=1 00:30:29.632 --rc genhtml_function_coverage=1 00:30:29.632 --rc genhtml_legend=1 00:30:29.632 --rc geninfo_all_blocks=1 00:30:29.632 --rc geninfo_unexecuted_blocks=1 00:30:29.632 00:30:29.632 ' 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:29.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.632 --rc genhtml_branch_coverage=1 00:30:29.632 --rc genhtml_function_coverage=1 00:30:29.632 --rc genhtml_legend=1 00:30:29.632 --rc geninfo_all_blocks=1 00:30:29.632 --rc geninfo_unexecuted_blocks=1 00:30:29.632 00:30:29.632 ' 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:29.632 21:20:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:29.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:29.632 21:20:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e56be7e4602a4e6a84e5eada52d9895d 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:29.632 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:29.633 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:29.633 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.633 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:29.633 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.633 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:29.633 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:29.633 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:29.633 21:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.630 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:31.630 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:31.630 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:31.630 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:31.630 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:31.630 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:31.630 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:31.630 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:31.630 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:31.631 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:31.631 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:31.631 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:31.631 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:31.631 21:20:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:31.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:30:31.631 00:30:31.631 --- 10.0.0.2 ping statistics --- 00:30:31.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.631 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:30:31.631 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:31.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:30:31.631 00:30:31.631 --- 10.0.0.1 ping statistics --- 00:30:31.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.632 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3098943 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3098943 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3098943 ']' 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:31.632 21:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.632 [2024-11-19 21:20:05.354146] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
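For reference, the nvmf_tcp_init plumbing traced above reduces to the short sequence below. This is a minimal sketch rather than the test's helper itself: it assumes the same two ice ports found on this host (cvl_0_0 moved into the target namespace, cvl_0_1 left as the initiator side) and the 10.0.0.0/24 addressing used here.

  # clear any stale addresses, then split the two ports across namespaces
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port, check reachability both ways, load the host driver
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  # the target application then runs inside the namespace (single core, shm id 0)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &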
00:30:31.632 [2024-11-19 21:20:05.354291] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:31.889 [2024-11-19 21:20:05.510958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.889 [2024-11-19 21:20:05.648668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:31.889 [2024-11-19 21:20:05.648771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:31.889 [2024-11-19 21:20:05.648796] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:31.889 [2024-11-19 21:20:05.648821] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:31.889 [2024-11-19 21:20:05.648840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:31.889 [2024-11-19 21:20:05.650493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:32.821 [2024-11-19 21:20:06.344697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:32.821 null0 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e56be7e4602a4e6a84e5eada52d9895d 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:32.821 [2024-11-19 21:20:06.385001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.821 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:33.079 nvme0n1 00:30:33.079 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.079 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:33.079 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.079 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:33.079 [ 00:30:33.079 { 00:30:33.079 "name": "nvme0n1", 00:30:33.079 "aliases": [ 00:30:33.079 "e56be7e4-602a-4e6a-84e5-eada52d9895d" 00:30:33.079 ], 00:30:33.079 "product_name": "NVMe disk", 00:30:33.079 "block_size": 512, 00:30:33.079 "num_blocks": 2097152, 00:30:33.079 "uuid": "e56be7e4-602a-4e6a-84e5-eada52d9895d", 00:30:33.079 "numa_id": 0, 00:30:33.079 "assigned_rate_limits": { 00:30:33.079 "rw_ios_per_sec": 0, 00:30:33.079 "rw_mbytes_per_sec": 0, 00:30:33.079 "r_mbytes_per_sec": 0, 00:30:33.079 "w_mbytes_per_sec": 0 00:30:33.079 }, 00:30:33.079 "claimed": false, 00:30:33.079 "zoned": false, 00:30:33.079 "supported_io_types": { 00:30:33.079 "read": true, 00:30:33.079 "write": true, 00:30:33.079 "unmap": false, 00:30:33.079 "flush": true, 00:30:33.079 "reset": true, 00:30:33.079 "nvme_admin": true, 00:30:33.079 "nvme_io": true, 00:30:33.079 "nvme_io_md": false, 00:30:33.079 "write_zeroes": true, 00:30:33.079 "zcopy": false, 00:30:33.079 "get_zone_info": false, 00:30:33.079 "zone_management": false, 00:30:33.079 "zone_append": false, 00:30:33.079 "compare": true, 00:30:33.079 "compare_and_write": true, 00:30:33.079 "abort": true, 00:30:33.079 "seek_hole": false, 00:30:33.079 "seek_data": false, 00:30:33.079 "copy": true, 00:30:33.079 "nvme_iov_md": false 00:30:33.079 }, 00:30:33.079 
"memory_domains": [ 00:30:33.079 { 00:30:33.079 "dma_device_id": "system", 00:30:33.079 "dma_device_type": 1 00:30:33.079 } 00:30:33.079 ], 00:30:33.079 "driver_specific": { 00:30:33.079 "nvme": [ 00:30:33.079 { 00:30:33.079 "trid": { 00:30:33.079 "trtype": "TCP", 00:30:33.079 "adrfam": "IPv4", 00:30:33.079 "traddr": "10.0.0.2", 00:30:33.079 "trsvcid": "4420", 00:30:33.079 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:33.079 }, 00:30:33.079 "ctrlr_data": { 00:30:33.079 "cntlid": 1, 00:30:33.079 "vendor_id": "0x8086", 00:30:33.079 "model_number": "SPDK bdev Controller", 00:30:33.079 "serial_number": "00000000000000000000", 00:30:33.079 "firmware_revision": "25.01", 00:30:33.079 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:33.079 "oacs": { 00:30:33.079 "security": 0, 00:30:33.079 "format": 0, 00:30:33.079 "firmware": 0, 00:30:33.079 "ns_manage": 0 00:30:33.079 }, 00:30:33.079 "multi_ctrlr": true, 00:30:33.079 "ana_reporting": false 00:30:33.079 }, 00:30:33.079 "vs": { 00:30:33.079 "nvme_version": "1.3" 00:30:33.079 }, 00:30:33.079 "ns_data": { 00:30:33.079 "id": 1, 00:30:33.079 "can_share": true 00:30:33.079 } 00:30:33.079 } 00:30:33.079 ], 00:30:33.079 "mp_policy": "active_passive" 00:30:33.079 } 00:30:33.080 } 00:30:33.080 ] 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:33.080 [2024-11-19 21:20:06.637524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:33.080 [2024-11-19 21:20:06.637666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:30:33.080 [2024-11-19 21:20:06.780277] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:33.080 [ 00:30:33.080 { 00:30:33.080 "name": "nvme0n1", 00:30:33.080 "aliases": [ 00:30:33.080 "e56be7e4-602a-4e6a-84e5-eada52d9895d" 00:30:33.080 ], 00:30:33.080 "product_name": "NVMe disk", 00:30:33.080 "block_size": 512, 00:30:33.080 "num_blocks": 2097152, 00:30:33.080 "uuid": "e56be7e4-602a-4e6a-84e5-eada52d9895d", 00:30:33.080 "numa_id": 0, 00:30:33.080 "assigned_rate_limits": { 00:30:33.080 "rw_ios_per_sec": 0, 00:30:33.080 "rw_mbytes_per_sec": 0, 00:30:33.080 "r_mbytes_per_sec": 0, 00:30:33.080 "w_mbytes_per_sec": 0 00:30:33.080 }, 00:30:33.080 "claimed": false, 00:30:33.080 "zoned": false, 00:30:33.080 "supported_io_types": { 00:30:33.080 "read": true, 00:30:33.080 "write": true, 00:30:33.080 "unmap": false, 00:30:33.080 "flush": true, 00:30:33.080 "reset": true, 00:30:33.080 "nvme_admin": true, 00:30:33.080 "nvme_io": true, 00:30:33.080 "nvme_io_md": false, 00:30:33.080 "write_zeroes": true, 00:30:33.080 "zcopy": false, 00:30:33.080 "get_zone_info": false, 00:30:33.080 "zone_management": false, 00:30:33.080 "zone_append": false, 00:30:33.080 "compare": true, 00:30:33.080 "compare_and_write": true, 00:30:33.080 "abort": true, 00:30:33.080 "seek_hole": false, 00:30:33.080 "seek_data": false, 00:30:33.080 "copy": true, 00:30:33.080 "nvme_iov_md": false 00:30:33.080 }, 00:30:33.080 "memory_domains": [ 00:30:33.080 { 00:30:33.080 "dma_device_id": "system", 00:30:33.080 "dma_device_type": 1 00:30:33.080 } 00:30:33.080 ], 00:30:33.080 "driver_specific": { 00:30:33.080 "nvme": [ 00:30:33.080 { 00:30:33.080 "trid": { 00:30:33.080 "trtype": "TCP", 00:30:33.080 "adrfam": "IPv4", 00:30:33.080 "traddr": "10.0.0.2", 00:30:33.080 "trsvcid": "4420", 00:30:33.080 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:33.080 }, 00:30:33.080 "ctrlr_data": { 00:30:33.080 "cntlid": 2, 00:30:33.080 "vendor_id": "0x8086", 00:30:33.080 "model_number": "SPDK bdev Controller", 00:30:33.080 "serial_number": "00000000000000000000", 00:30:33.080 "firmware_revision": "25.01", 00:30:33.080 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:33.080 "oacs": { 00:30:33.080 "security": 0, 00:30:33.080 "format": 0, 00:30:33.080 "firmware": 0, 00:30:33.080 "ns_manage": 0 00:30:33.080 }, 00:30:33.080 "multi_ctrlr": true, 00:30:33.080 "ana_reporting": false 00:30:33.080 }, 00:30:33.080 "vs": { 00:30:33.080 "nvme_version": "1.3" 00:30:33.080 }, 00:30:33.080 "ns_data": { 00:30:33.080 "id": 1, 00:30:33.080 "can_share": true 00:30:33.080 } 00:30:33.080 } 00:30:33.080 ], 00:30:33.080 "mp_policy": "active_passive" 00:30:33.080 } 00:30:33.080 } 00:30:33.080 ] 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
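The two bdev_get_bdevs dumps differ only in ctrlr_data.cntlid: 1 before bdev_nvme_reset_controller, 2 after it (and 3 later on the TLS listener). The reconnect is accepted as a fresh controller on the same subsystem while the bdev name, UUID and namespace data stay identical, which is what this part of the test is checking. The field is easy to spot-check by hand, e.g.:

  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 | grep cntlid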
00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.OIgwzQwspa 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.OIgwzQwspa 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.OIgwzQwspa 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:33.080 [2024-11-19 21:20:06.838289] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:33.080 [2024-11-19 21:20:06.838615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.080 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:33.080 [2024-11-19 21:20:06.854309] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:33.339 nvme0n1 00:30:33.339 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.339 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:30:33.339 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.339 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:33.339 [ 00:30:33.339 { 00:30:33.339 "name": "nvme0n1", 00:30:33.339 "aliases": [ 00:30:33.339 "e56be7e4-602a-4e6a-84e5-eada52d9895d" 00:30:33.339 ], 00:30:33.339 "product_name": "NVMe disk", 00:30:33.339 "block_size": 512, 00:30:33.339 "num_blocks": 2097152, 00:30:33.339 "uuid": "e56be7e4-602a-4e6a-84e5-eada52d9895d", 00:30:33.339 "numa_id": 0, 00:30:33.339 "assigned_rate_limits": { 00:30:33.339 "rw_ios_per_sec": 0, 00:30:33.339 "rw_mbytes_per_sec": 0, 00:30:33.339 "r_mbytes_per_sec": 0, 00:30:33.339 "w_mbytes_per_sec": 0 00:30:33.339 }, 00:30:33.339 "claimed": false, 00:30:33.339 "zoned": false, 00:30:33.339 "supported_io_types": { 00:30:33.339 "read": true, 00:30:33.339 "write": true, 00:30:33.339 "unmap": false, 00:30:33.339 "flush": true, 00:30:33.339 "reset": true, 00:30:33.339 "nvme_admin": true, 00:30:33.339 "nvme_io": true, 00:30:33.339 "nvme_io_md": false, 00:30:33.339 "write_zeroes": true, 00:30:33.339 "zcopy": false, 00:30:33.339 "get_zone_info": false, 00:30:33.339 "zone_management": false, 00:30:33.339 "zone_append": false, 00:30:33.339 "compare": true, 00:30:33.339 "compare_and_write": true, 00:30:33.339 "abort": true, 00:30:33.339 "seek_hole": false, 00:30:33.339 "seek_data": false, 00:30:33.339 "copy": true, 00:30:33.339 "nvme_iov_md": false 00:30:33.339 }, 00:30:33.339 "memory_domains": [ 00:30:33.339 { 00:30:33.339 "dma_device_id": "system", 00:30:33.339 "dma_device_type": 1 00:30:33.339 } 00:30:33.339 ], 00:30:33.339 "driver_specific": { 00:30:33.339 "nvme": [ 00:30:33.339 { 00:30:33.339 "trid": { 00:30:33.339 "trtype": "TCP", 00:30:33.339 "adrfam": "IPv4", 00:30:33.339 "traddr": "10.0.0.2", 00:30:33.339 "trsvcid": "4421", 00:30:33.339 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:33.339 }, 00:30:33.339 "ctrlr_data": { 00:30:33.339 "cntlid": 3, 00:30:33.339 "vendor_id": "0x8086", 00:30:33.339 "model_number": "SPDK bdev Controller", 00:30:33.339 "serial_number": "00000000000000000000", 00:30:33.339 "firmware_revision": "25.01", 00:30:33.339 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:33.339 "oacs": { 00:30:33.339 "security": 0, 00:30:33.339 "format": 0, 00:30:33.339 "firmware": 0, 00:30:33.339 "ns_manage": 0 00:30:33.339 }, 00:30:33.339 "multi_ctrlr": true, 00:30:33.339 "ana_reporting": false 00:30:33.339 }, 00:30:33.339 "vs": { 00:30:33.339 "nvme_version": "1.3" 00:30:33.339 }, 00:30:33.339 "ns_data": { 00:30:33.339 "id": 1, 00:30:33.339 "can_share": true 00:30:33.339 } 00:30:33.339 } 00:30:33.339 ], 00:30:33.339 "mp_policy": "active_passive" 00:30:33.339 } 00:30:33.339 } 00:30:33.339 ] 00:30:33.339 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.339 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:33.339 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.339 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:33.340 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.340 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.OIgwzQwspa 00:30:33.340 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
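The secure-channel pass above layers a few extra RPCs on top of the plain setup: register the interchange-format PSK as a keyring file key, stop admitting arbitrary hosts, expose a --secure-channel listener on port 4421, and bind the host NQN to the key on both ends. A sketch under the same assumptions as before; /tmp/psk.key stands in for the mktemp path used in this run (/tmp/tmp.OIgwzQwspa) and must hold the PSK echoed above with mode 0600.

  chmod 0600 /tmp/psk.key                                 # hypothetical path for the PSK file written by the test
  ./scripts/rpc.py keyring_file_add_key key0 /tmp/psk.key
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  # host side: the attach must present the same host NQN and key
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0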
00:30:33.340 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:33.340 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:33.340 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:33.340 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:33.340 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:33.340 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:33.340 21:20:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:33.340 rmmod nvme_tcp 00:30:33.340 rmmod nvme_fabrics 00:30:33.340 rmmod nvme_keyring 00:30:33.340 21:20:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:33.340 21:20:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:33.340 21:20:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:33.340 21:20:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3098943 ']' 00:30:33.340 21:20:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3098943 00:30:33.340 21:20:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3098943 ']' 00:30:33.340 21:20:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3098943 00:30:33.340 21:20:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:30:33.340 21:20:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:33.340 21:20:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3098943 00:30:33.340 21:20:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:33.340 21:20:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:33.340 21:20:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3098943' 00:30:33.340 killing process with pid 3098943 00:30:33.340 21:20:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3098943 00:30:33.340 21:20:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3098943 00:30:34.716 21:20:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:34.716 21:20:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:34.716 21:20:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:34.716 21:20:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:34.716 21:20:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:30:34.716 21:20:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:34.716 21:20:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:30:34.716 21:20:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:34.716 21:20:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:34.716 21:20:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
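Cleanup in nvmftestfini leans on the SPDK_NVMF comment attached to the firewall rule earlier: iptr dumps the full rule set, drops every entry carrying that tag, and restores the rest, so only the test's own rules disappear. Roughly, leaving the namespace teardown to the _remove_spdk_ns helper whose trace is suppressed here:

  iptables-save | grep -v SPDK_NVMF | iptables-restore    # remove only the tagged test rules
  modprobe -v -r nvme-tcp                                 # the rmmod lines above show this also dropping nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics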
00:30:34.716 21:20:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:34.716 21:20:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.622 21:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:36.622 00:30:36.622 real 0m7.301s 00:30:36.622 user 0m3.915s 00:30:36.622 sys 0m2.052s 00:30:36.622 21:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:36.622 21:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:36.622 ************************************ 00:30:36.622 END TEST nvmf_async_init 00:30:36.622 ************************************ 00:30:36.622 21:20:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:36.622 21:20:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:36.622 21:20:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:36.622 21:20:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.622 ************************************ 00:30:36.622 START TEST dma 00:30:36.622 ************************************ 00:30:36.622 21:20:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:36.622 * Looking for test storage... 00:30:36.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:36.622 21:20:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:36.622 21:20:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:30:36.622 21:20:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.881 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:36.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.882 --rc genhtml_branch_coverage=1 00:30:36.882 --rc genhtml_function_coverage=1 00:30:36.882 --rc genhtml_legend=1 00:30:36.882 --rc geninfo_all_blocks=1 00:30:36.882 --rc geninfo_unexecuted_blocks=1 00:30:36.882 00:30:36.882 ' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:36.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.882 --rc genhtml_branch_coverage=1 00:30:36.882 --rc genhtml_function_coverage=1 00:30:36.882 --rc genhtml_legend=1 00:30:36.882 --rc geninfo_all_blocks=1 00:30:36.882 --rc geninfo_unexecuted_blocks=1 00:30:36.882 00:30:36.882 ' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:36.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.882 --rc genhtml_branch_coverage=1 00:30:36.882 --rc genhtml_function_coverage=1 00:30:36.882 --rc genhtml_legend=1 00:30:36.882 --rc geninfo_all_blocks=1 00:30:36.882 --rc geninfo_unexecuted_blocks=1 00:30:36.882 00:30:36.882 ' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:36.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.882 --rc genhtml_branch_coverage=1 00:30:36.882 --rc genhtml_function_coverage=1 00:30:36.882 --rc genhtml_legend=1 00:30:36.882 --rc geninfo_all_blocks=1 00:30:36.882 --rc geninfo_unexecuted_blocks=1 00:30:36.882 00:30:36.882 ' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:36.882 
21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:36.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:36.882 00:30:36.882 real 0m0.158s 00:30:36.882 user 0m0.110s 00:30:36.882 sys 0m0.057s 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:36.882 ************************************ 00:30:36.882 END TEST dma 00:30:36.882 ************************************ 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.882 ************************************ 00:30:36.882 START TEST nvmf_identify 00:30:36.882 
************************************ 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:36.882 * Looking for test storage... 00:30:36.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:36.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.882 --rc genhtml_branch_coverage=1 00:30:36.882 --rc genhtml_function_coverage=1 00:30:36.882 --rc genhtml_legend=1 00:30:36.882 --rc geninfo_all_blocks=1 00:30:36.882 --rc geninfo_unexecuted_blocks=1 00:30:36.882 00:30:36.882 ' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:36.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.882 --rc genhtml_branch_coverage=1 00:30:36.882 --rc genhtml_function_coverage=1 00:30:36.882 --rc genhtml_legend=1 00:30:36.882 --rc geninfo_all_blocks=1 00:30:36.882 --rc geninfo_unexecuted_blocks=1 00:30:36.882 00:30:36.882 ' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:36.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.882 --rc genhtml_branch_coverage=1 00:30:36.882 --rc genhtml_function_coverage=1 00:30:36.882 --rc genhtml_legend=1 00:30:36.882 --rc geninfo_all_blocks=1 00:30:36.882 --rc geninfo_unexecuted_blocks=1 00:30:36.882 00:30:36.882 ' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:36.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.882 --rc genhtml_branch_coverage=1 00:30:36.882 --rc genhtml_function_coverage=1 00:30:36.882 --rc genhtml_legend=1 00:30:36.882 --rc geninfo_all_blocks=1 00:30:36.882 --rc geninfo_unexecuted_blocks=1 00:30:36.882 00:30:36.882 ' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:36.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:36.882 21:20:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:39.414 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:39.414 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
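[editor annotation, not part of the captured log] The trace above is gather_supported_nvmf_pci_devs from nvmf/common.sh: it builds arrays of supported NIC PCI IDs (Intel E810 0x8086:0x159b, several Mellanox IDs) and then resolves each matching PCI function to its kernel net device via sysfs, which is where the "Found 0000:0a:00.0 (0x8086 - 0x159b)" and "Found net devices under ...: cvl_0_0" messages come from. A minimal stand-alone sketch of the same idea, assuming a standard Linux sysfs layout and the E810 vendor:device pair seen in this run (the variable names are illustrative, not the script's own):

    #!/usr/bin/env bash
    # Sketch: list net devices backed by a given PCI vendor:device pair.
    vendor=0x8086 device=0x159b          # Intel E810 (ice), as matched in this run
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == "$vendor" && $(cat "$pci/device") == "$device" ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "  net device: ${net##*/}"   # e.g. cvl_0_0 / cvl_0_1
        done
    done

The log continues below with the second E810 port (0000:0a:00.1 -> cvl_0_1) being discovered the same way.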
00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:39.414 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:39.414 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:39.414 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:39.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:39.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:30:39.414 00:30:39.414 --- 10.0.0.2 ping statistics --- 00:30:39.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.414 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:39.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:39.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:30:39.415 00:30:39.415 --- 10.0.0.1 ping statistics --- 00:30:39.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.415 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3101340 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3101340 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3101340 ']' 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:39.415 21:20:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:39.415 [2024-11-19 21:20:12.957628] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
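[editor annotation, not part of the captured log] The block above is nvmf_tcp_init from nvmf/common.sh: the target-side E810 port (cvl_0_0) is moved into a private network namespace, the target gets 10.0.0.2/24 inside the namespace while the initiator keeps 10.0.0.1/24 on cvl_0_1 in the root namespace, an iptables rule admits NVMe/TCP traffic on port 4420, and both directions are ping-checked before nvmf_tgt is launched inside the namespace. A condensed sketch of that topology, using the interface and namespace names from this run:

    ns=cvl_0_0_ns_spdk                        # namespace that will own the target port
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"           # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator IP stays in the root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the target
    ping -c 1 10.0.0.2                        # reachability check before starting nvmf_tgt
    ip netns exec "$ns" ping -c 1 10.0.0.1

With this in place the target application below is started as "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...", so its TCP listener on 10.0.0.2:4420 is reachable from the initiator interface over the physical link.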
00:30:39.415 [2024-11-19 21:20:12.957770] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.415 [2024-11-19 21:20:13.101986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:39.672 [2024-11-19 21:20:13.238485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.672 [2024-11-19 21:20:13.238556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.672 [2024-11-19 21:20:13.238582] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.672 [2024-11-19 21:20:13.238605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.672 [2024-11-19 21:20:13.238624] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:39.672 [2024-11-19 21:20:13.241425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.672 [2024-11-19 21:20:13.241500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:39.672 [2024-11-19 21:20:13.241599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.672 [2024-11-19 21:20:13.241604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:40.236 21:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:40.236 21:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:30:40.236 21:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:40.236 21:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.236 21:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:40.236 [2024-11-19 21:20:13.972285] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.236 21:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.236 21:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:40.236 21:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:40.236 21:20:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:40.236 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:40.236 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.236 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:40.495 Malloc0 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:40.495 [2024-11-19 21:20:14.114181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:40.495 [ 00:30:40.495 { 00:30:40.495 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:40.495 "subtype": "Discovery", 00:30:40.495 "listen_addresses": [ 00:30:40.495 { 00:30:40.495 "trtype": "TCP", 00:30:40.495 "adrfam": "IPv4", 00:30:40.495 "traddr": "10.0.0.2", 00:30:40.495 "trsvcid": "4420" 00:30:40.495 } 00:30:40.495 ], 00:30:40.495 "allow_any_host": true, 00:30:40.495 "hosts": [] 00:30:40.495 }, 00:30:40.495 { 00:30:40.495 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:40.495 "subtype": "NVMe", 00:30:40.495 "listen_addresses": [ 00:30:40.495 { 00:30:40.495 "trtype": "TCP", 00:30:40.495 "adrfam": "IPv4", 00:30:40.495 "traddr": "10.0.0.2", 00:30:40.495 "trsvcid": "4420" 00:30:40.495 } 00:30:40.495 ], 00:30:40.495 "allow_any_host": true, 00:30:40.495 "hosts": [], 00:30:40.495 "serial_number": "SPDK00000000000001", 00:30:40.495 "model_number": "SPDK bdev Controller", 00:30:40.495 "max_namespaces": 32, 00:30:40.495 "min_cntlid": 1, 00:30:40.495 "max_cntlid": 65519, 00:30:40.495 "namespaces": [ 00:30:40.495 { 00:30:40.495 "nsid": 1, 00:30:40.495 "bdev_name": "Malloc0", 00:30:40.495 "name": "Malloc0", 00:30:40.495 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:40.495 "eui64": "ABCDEF0123456789", 00:30:40.495 "uuid": "5554e2b9-c06f-4d8d-8718-080aec67014d" 00:30:40.495 } 00:30:40.495 ] 00:30:40.495 } 00:30:40.495 ] 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.495 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:40.495 [2024-11-19 21:20:14.181153] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:30:40.495 [2024-11-19 21:20:14.181273] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3101493 ] 00:30:40.495 [2024-11-19 21:20:14.260683] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:30:40.495 [2024-11-19 21:20:14.260805] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:40.495 [2024-11-19 21:20:14.260828] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:40.495 [2024-11-19 21:20:14.260861] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:40.495 [2024-11-19 21:20:14.260885] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:40.495 [2024-11-19 21:20:14.261843] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:30:40.495 [2024-11-19 21:20:14.261938] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:40.495 [2024-11-19 21:20:14.272091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:40.495 [2024-11-19 21:20:14.272130] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:40.495 [2024-11-19 21:20:14.272147] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:40.495 [2024-11-19 21:20:14.272158] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:40.495 [2024-11-19 21:20:14.272249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.495 [2024-11-19 21:20:14.272270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.495 [2024-11-19 21:20:14.272284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:40.495 [2024-11-19 21:20:14.272324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:40.495 [2024-11-19 21:20:14.272380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:40.495 [2024-11-19 21:20:14.280115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.495 [2024-11-19 21:20:14.280148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.495 [2024-11-19 21:20:14.280163] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.495 [2024-11-19 21:20:14.280177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:40.495 [2024-11-19 21:20:14.280204] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:40.495 [2024-11-19 21:20:14.280243] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:30:40.496 [2024-11-19 21:20:14.280267] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:30:40.496 [2024-11-19 
21:20:14.280295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.280315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.280329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:40.496 [2024-11-19 21:20:14.280351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.496 [2024-11-19 21:20:14.280403] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:40.496 [2024-11-19 21:20:14.280609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.496 [2024-11-19 21:20:14.280634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.496 [2024-11-19 21:20:14.280647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.280659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:40.496 [2024-11-19 21:20:14.280696] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:30:40.496 [2024-11-19 21:20:14.280722] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:30:40.496 [2024-11-19 21:20:14.280744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.280781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.280796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:40.496 [2024-11-19 21:20:14.280820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.496 [2024-11-19 21:20:14.280855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:40.496 [2024-11-19 21:20:14.281005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.496 [2024-11-19 21:20:14.281033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.496 [2024-11-19 21:20:14.281047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.281059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:40.496 [2024-11-19 21:20:14.281084] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:30:40.496 [2024-11-19 21:20:14.281115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:40.496 [2024-11-19 21:20:14.281143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.281157] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.281171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:40.496 [2024-11-19 21:20:14.281191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.496 [2024-11-19 21:20:14.281225] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:40.496 [2024-11-19 21:20:14.281330] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.496 [2024-11-19 21:20:14.281351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.496 [2024-11-19 21:20:14.281363] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.281374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:40.496 [2024-11-19 21:20:14.281390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:40.496 [2024-11-19 21:20:14.281418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.281435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.281453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:40.496 [2024-11-19 21:20:14.281499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.496 [2024-11-19 21:20:14.281555] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:40.496 [2024-11-19 21:20:14.281726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.496 [2024-11-19 21:20:14.281749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.496 [2024-11-19 21:20:14.281761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.281772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:40.496 [2024-11-19 21:20:14.281787] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:40.496 [2024-11-19 21:20:14.281802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:40.496 [2024-11-19 21:20:14.281824] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:40.496 [2024-11-19 21:20:14.281942] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:30:40.496 [2024-11-19 21:20:14.281957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:40.496 [2024-11-19 21:20:14.281980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.281994] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.282006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:40.496 [2024-11-19 21:20:14.282031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.496 [2024-11-19 21:20:14.282097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:40.496 [2024-11-19 21:20:14.282230] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.496 [2024-11-19 21:20:14.282256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.496 [2024-11-19 21:20:14.282270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.282281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:40.496 [2024-11-19 21:20:14.282296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:40.496 [2024-11-19 21:20:14.282324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.282340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.282352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:40.496 [2024-11-19 21:20:14.282377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.496 [2024-11-19 21:20:14.282433] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:40.496 [2024-11-19 21:20:14.282621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.496 [2024-11-19 21:20:14.282644] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.496 [2024-11-19 21:20:14.282656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.282667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:40.496 [2024-11-19 21:20:14.282689] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:40.496 [2024-11-19 21:20:14.282706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:40.496 [2024-11-19 21:20:14.282730] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:40.496 [2024-11-19 21:20:14.282760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:40.496 [2024-11-19 21:20:14.282792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.282809] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:40.496 [2024-11-19 21:20:14.282830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.496 [2024-11-19 21:20:14.282863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:40.496 [2024-11-19 21:20:14.283064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:40.496 [2024-11-19 21:20:14.283109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:40.496 [2024-11-19 21:20:14.283124] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.283136] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info 
on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:40.496 [2024-11-19 21:20:14.283151] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:40.496 [2024-11-19 21:20:14.283163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.283184] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.283198] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.283229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.496 [2024-11-19 21:20:14.283247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.496 [2024-11-19 21:20:14.283259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.283270] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:40.496 [2024-11-19 21:20:14.283295] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:40.496 [2024-11-19 21:20:14.283317] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:40.496 [2024-11-19 21:20:14.283332] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:40.496 [2024-11-19 21:20:14.283351] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:40.496 [2024-11-19 21:20:14.283385] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:30:40.496 [2024-11-19 21:20:14.283399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:40.496 [2024-11-19 21:20:14.283430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:40.496 [2024-11-19 21:20:14.283452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.496 [2024-11-19 21:20:14.283466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.497 [2024-11-19 21:20:14.283483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:40.497 [2024-11-19 21:20:14.283504] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:40.497 [2024-11-19 21:20:14.283537] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:40.497 [2024-11-19 21:20:14.283694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.497 [2024-11-19 21:20:14.283717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.497 [2024-11-19 21:20:14.283729] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.497 [2024-11-19 21:20:14.283740] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:40.497 [2024-11-19 21:20:14.283760] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.497 [2024-11-19 21:20:14.283782] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.497 [2024-11-19 21:20:14.283797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:40.497 [2024-11-19 21:20:14.283821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.497 [2024-11-19 21:20:14.283840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.497 [2024-11-19 21:20:14.283853] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.497 [2024-11-19 21:20:14.283864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:40.497 [2024-11-19 21:20:14.283881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.497 [2024-11-19 21:20:14.283898] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.497 [2024-11-19 21:20:14.283910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.497 [2024-11-19 21:20:14.283920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:40.497 [2024-11-19 21:20:14.283937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.497 [2024-11-19 21:20:14.283953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.497 [2024-11-19 21:20:14.283970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.497 [2024-11-19 21:20:14.283982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:40.497 [2024-11-19 21:20:14.283999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.497 [2024-11-19 21:20:14.284015] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:40.497 [2024-11-19 21:20:14.284043] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:40.756 [2024-11-19 21:20:14.284064] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.756 [2024-11-19 21:20:14.288115] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:40.756 [2024-11-19 21:20:14.288137] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.756 [2024-11-19 21:20:14.288175] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:40.756 [2024-11-19 21:20:14.288193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:40.756 [2024-11-19 21:20:14.288213] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:40.756 [2024-11-19 21:20:14.288228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:40.756 [2024-11-19 21:20:14.288241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:40.756 [2024-11-19 21:20:14.288465] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.756 [2024-11-19 21:20:14.288489] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.756 [2024-11-19 21:20:14.288501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.756 [2024-11-19 21:20:14.288513] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:40.756 [2024-11-19 21:20:14.288529] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:40.756 [2024-11-19 21:20:14.288546] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:30:40.756 [2024-11-19 21:20:14.288580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.756 [2024-11-19 21:20:14.288597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:40.756 [2024-11-19 21:20:14.288638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.756 [2024-11-19 21:20:14.288672] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:40.756 [2024-11-19 21:20:14.288847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:40.756 [2024-11-19 21:20:14.288870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:40.756 [2024-11-19 21:20:14.288889] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:40.756 [2024-11-19 21:20:14.288901] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:40.756 [2024-11-19 21:20:14.288915] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:40.756 [2024-11-19 21:20:14.288927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.756 [2024-11-19 21:20:14.288955] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:40.756 [2024-11-19 21:20:14.288972] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:40.756 [2024-11-19 21:20:14.288991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.756 [2024-11-19 21:20:14.289014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.756 [2024-11-19 21:20:14.289027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.756 [2024-11-19 21:20:14.289039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:40.756 [2024-11-19 21:20:14.289085] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:40.756 [2024-11-19 21:20:14.289167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.756 [2024-11-19 21:20:14.289186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:40.756 [2024-11-19 21:20:14.289212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.756 [2024-11-19 21:20:14.289234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:30:40.756 [2024-11-19 21:20:14.289248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.756 [2024-11-19 21:20:14.289260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:40.756 [2024-11-19 21:20:14.289278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.756 [2024-11-19 21:20:14.289322] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:40.756 [2024-11-19 21:20:14.289342] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:40.756 [2024-11-19 21:20:14.289584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:40.756 [2024-11-19 21:20:14.289607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:40.756 [2024-11-19 21:20:14.289635] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:40.756 [2024-11-19 21:20:14.289653] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=1024, cccid=4 00:30:40.756 [2024-11-19 21:20:14.289666] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=1024 00:30:40.756 [2024-11-19 21:20:14.289681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.756 [2024-11-19 21:20:14.289704] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:40.756 [2024-11-19 21:20:14.289719] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:40.756 [2024-11-19 21:20:14.289734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.756 [2024-11-19 21:20:14.289749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.756 [2024-11-19 21:20:14.289760] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.756 [2024-11-19 21:20:14.289772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:40.756 [2024-11-19 21:20:14.330184] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.756 [2024-11-19 21:20:14.330216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.756 [2024-11-19 21:20:14.330229] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.756 [2024-11-19 21:20:14.330242] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:40.756 [2024-11-19 21:20:14.330293] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.756 [2024-11-19 21:20:14.330313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:40.756 [2024-11-19 21:20:14.330338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.756 [2024-11-19 21:20:14.330389] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:40.756 [2024-11-19 21:20:14.330544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:40.756 [2024-11-19 21:20:14.330565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:40.756 [2024-11-19 21:20:14.330577] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:40.756 
[2024-11-19 21:20:14.330589] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=3072, cccid=4 00:30:40.756 [2024-11-19 21:20:14.330601] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=3072 00:30:40.756 [2024-11-19 21:20:14.330613] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.756 [2024-11-19 21:20:14.330645] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:40.756 [2024-11-19 21:20:14.330662] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:40.757 [2024-11-19 21:20:14.375106] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.757 [2024-11-19 21:20:14.375136] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.757 [2024-11-19 21:20:14.375177] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.757 [2024-11-19 21:20:14.375190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:40.757 [2024-11-19 21:20:14.375224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.757 [2024-11-19 21:20:14.375242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:40.757 [2024-11-19 21:20:14.375265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.757 [2024-11-19 21:20:14.375316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:40.757 [2024-11-19 21:20:14.375486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:40.757 [2024-11-19 21:20:14.375513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:40.757 [2024-11-19 21:20:14.375526] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:40.757 [2024-11-19 21:20:14.375538] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8, cccid=4 00:30:40.757 [2024-11-19 21:20:14.375550] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=8 00:30:40.757 [2024-11-19 21:20:14.375562] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.757 [2024-11-19 21:20:14.375580] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:40.757 [2024-11-19 21:20:14.375608] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:40.757 [2024-11-19 21:20:14.416172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.757 [2024-11-19 21:20:14.416201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.757 [2024-11-19 21:20:14.416215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.757 [2024-11-19 21:20:14.416227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:40.757 ===================================================== 00:30:40.757 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:40.757 ===================================================== 00:30:40.757 Controller Capabilities/Features 00:30:40.757 ================================ 00:30:40.757 Vendor ID: 0000 00:30:40.757 Subsystem Vendor ID: 0000 
00:30:40.757 Serial Number: .................... 00:30:40.757 Model Number: ........................................ 00:30:40.757 Firmware Version: 25.01 00:30:40.757 Recommended Arb Burst: 0 00:30:40.757 IEEE OUI Identifier: 00 00 00 00:30:40.757 Multi-path I/O 00:30:40.757 May have multiple subsystem ports: No 00:30:40.757 May have multiple controllers: No 00:30:40.757 Associated with SR-IOV VF: No 00:30:40.757 Max Data Transfer Size: 131072 00:30:40.757 Max Number of Namespaces: 0 00:30:40.757 Max Number of I/O Queues: 1024 00:30:40.757 NVMe Specification Version (VS): 1.3 00:30:40.757 NVMe Specification Version (Identify): 1.3 00:30:40.757 Maximum Queue Entries: 128 00:30:40.757 Contiguous Queues Required: Yes 00:30:40.757 Arbitration Mechanisms Supported 00:30:40.757 Weighted Round Robin: Not Supported 00:30:40.757 Vendor Specific: Not Supported 00:30:40.757 Reset Timeout: 15000 ms 00:30:40.757 Doorbell Stride: 4 bytes 00:30:40.757 NVM Subsystem Reset: Not Supported 00:30:40.757 Command Sets Supported 00:30:40.757 NVM Command Set: Supported 00:30:40.757 Boot Partition: Not Supported 00:30:40.757 Memory Page Size Minimum: 4096 bytes 00:30:40.757 Memory Page Size Maximum: 4096 bytes 00:30:40.757 Persistent Memory Region: Not Supported 00:30:40.757 Optional Asynchronous Events Supported 00:30:40.757 Namespace Attribute Notices: Not Supported 00:30:40.757 Firmware Activation Notices: Not Supported 00:30:40.757 ANA Change Notices: Not Supported 00:30:40.757 PLE Aggregate Log Change Notices: Not Supported 00:30:40.757 LBA Status Info Alert Notices: Not Supported 00:30:40.757 EGE Aggregate Log Change Notices: Not Supported 00:30:40.757 Normal NVM Subsystem Shutdown event: Not Supported 00:30:40.757 Zone Descriptor Change Notices: Not Supported 00:30:40.757 Discovery Log Change Notices: Supported 00:30:40.757 Controller Attributes 00:30:40.757 128-bit Host Identifier: Not Supported 00:30:40.757 Non-Operational Permissive Mode: Not Supported 00:30:40.757 NVM Sets: Not Supported 00:30:40.757 Read Recovery Levels: Not Supported 00:30:40.757 Endurance Groups: Not Supported 00:30:40.757 Predictable Latency Mode: Not Supported 00:30:40.757 Traffic Based Keep ALive: Not Supported 00:30:40.757 Namespace Granularity: Not Supported 00:30:40.757 SQ Associations: Not Supported 00:30:40.757 UUID List: Not Supported 00:30:40.757 Multi-Domain Subsystem: Not Supported 00:30:40.757 Fixed Capacity Management: Not Supported 00:30:40.757 Variable Capacity Management: Not Supported 00:30:40.757 Delete Endurance Group: Not Supported 00:30:40.757 Delete NVM Set: Not Supported 00:30:40.757 Extended LBA Formats Supported: Not Supported 00:30:40.757 Flexible Data Placement Supported: Not Supported 00:30:40.757 00:30:40.757 Controller Memory Buffer Support 00:30:40.757 ================================ 00:30:40.757 Supported: No 00:30:40.757 00:30:40.757 Persistent Memory Region Support 00:30:40.757 ================================ 00:30:40.757 Supported: No 00:30:40.757 00:30:40.757 Admin Command Set Attributes 00:30:40.757 ============================ 00:30:40.757 Security Send/Receive: Not Supported 00:30:40.757 Format NVM: Not Supported 00:30:40.757 Firmware Activate/Download: Not Supported 00:30:40.757 Namespace Management: Not Supported 00:30:40.757 Device Self-Test: Not Supported 00:30:40.757 Directives: Not Supported 00:30:40.757 NVMe-MI: Not Supported 00:30:40.757 Virtualization Management: Not Supported 00:30:40.757 Doorbell Buffer Config: Not Supported 00:30:40.757 Get LBA Status Capability: Not Supported 
00:30:40.757 Command & Feature Lockdown Capability: Not Supported 00:30:40.757 Abort Command Limit: 1 00:30:40.757 Async Event Request Limit: 4 00:30:40.757 Number of Firmware Slots: N/A 00:30:40.757 Firmware Slot 1 Read-Only: N/A 00:30:40.757 Firmware Activation Without Reset: N/A 00:30:40.757 Multiple Update Detection Support: N/A 00:30:40.757 Firmware Update Granularity: No Information Provided 00:30:40.757 Per-Namespace SMART Log: No 00:30:40.757 Asymmetric Namespace Access Log Page: Not Supported 00:30:40.757 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:40.757 Command Effects Log Page: Not Supported 00:30:40.757 Get Log Page Extended Data: Supported 00:30:40.757 Telemetry Log Pages: Not Supported 00:30:40.757 Persistent Event Log Pages: Not Supported 00:30:40.757 Supported Log Pages Log Page: May Support 00:30:40.757 Commands Supported & Effects Log Page: Not Supported 00:30:40.757 Feature Identifiers & Effects Log Page:May Support 00:30:40.757 NVMe-MI Commands & Effects Log Page: May Support 00:30:40.757 Data Area 4 for Telemetry Log: Not Supported 00:30:40.757 Error Log Page Entries Supported: 128 00:30:40.757 Keep Alive: Not Supported 00:30:40.757 00:30:40.757 NVM Command Set Attributes 00:30:40.757 ========================== 00:30:40.757 Submission Queue Entry Size 00:30:40.757 Max: 1 00:30:40.757 Min: 1 00:30:40.757 Completion Queue Entry Size 00:30:40.757 Max: 1 00:30:40.757 Min: 1 00:30:40.757 Number of Namespaces: 0 00:30:40.757 Compare Command: Not Supported 00:30:40.758 Write Uncorrectable Command: Not Supported 00:30:40.758 Dataset Management Command: Not Supported 00:30:40.758 Write Zeroes Command: Not Supported 00:30:40.758 Set Features Save Field: Not Supported 00:30:40.758 Reservations: Not Supported 00:30:40.758 Timestamp: Not Supported 00:30:40.758 Copy: Not Supported 00:30:40.758 Volatile Write Cache: Not Present 00:30:40.758 Atomic Write Unit (Normal): 1 00:30:40.758 Atomic Write Unit (PFail): 1 00:30:40.758 Atomic Compare & Write Unit: 1 00:30:40.758 Fused Compare & Write: Supported 00:30:40.758 Scatter-Gather List 00:30:40.758 SGL Command Set: Supported 00:30:40.758 SGL Keyed: Supported 00:30:40.758 SGL Bit Bucket Descriptor: Not Supported 00:30:40.758 SGL Metadata Pointer: Not Supported 00:30:40.758 Oversized SGL: Not Supported 00:30:40.758 SGL Metadata Address: Not Supported 00:30:40.758 SGL Offset: Supported 00:30:40.758 Transport SGL Data Block: Not Supported 00:30:40.758 Replay Protected Memory Block: Not Supported 00:30:40.758 00:30:40.758 Firmware Slot Information 00:30:40.758 ========================= 00:30:40.758 Active slot: 0 00:30:40.758 00:30:40.758 00:30:40.758 Error Log 00:30:40.758 ========= 00:30:40.758 00:30:40.758 Active Namespaces 00:30:40.758 ================= 00:30:40.758 Discovery Log Page 00:30:40.758 ================== 00:30:40.758 Generation Counter: 2 00:30:40.758 Number of Records: 2 00:30:40.758 Record Format: 0 00:30:40.758 00:30:40.758 Discovery Log Entry 0 00:30:40.758 ---------------------- 00:30:40.758 Transport Type: 3 (TCP) 00:30:40.758 Address Family: 1 (IPv4) 00:30:40.758 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:40.758 Entry Flags: 00:30:40.758 Duplicate Returned Information: 1 00:30:40.758 Explicit Persistent Connection Support for Discovery: 1 00:30:40.758 Transport Requirements: 00:30:40.758 Secure Channel: Not Required 00:30:40.758 Port ID: 0 (0x0000) 00:30:40.758 Controller ID: 65535 (0xffff) 00:30:40.758 Admin Max SQ Size: 128 00:30:40.758 Transport Service Identifier: 4420 00:30:40.758 NVM 
Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:40.758 Transport Address: 10.0.0.2 00:30:40.758 Discovery Log Entry 1 00:30:40.758 ---------------------- 00:30:40.758 Transport Type: 3 (TCP) 00:30:40.758 Address Family: 1 (IPv4) 00:30:40.758 Subsystem Type: 2 (NVM Subsystem) 00:30:40.758 Entry Flags: 00:30:40.758 Duplicate Returned Information: 0 00:30:40.758 Explicit Persistent Connection Support for Discovery: 0 00:30:40.758 Transport Requirements: 00:30:40.758 Secure Channel: Not Required 00:30:40.758 Port ID: 0 (0x0000) 00:30:40.758 Controller ID: 65535 (0xffff) 00:30:40.758 Admin Max SQ Size: 128 00:30:40.758 Transport Service Identifier: 4420 00:30:40.758 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:40.758 Transport Address: 10.0.0.2 [2024-11-19 21:20:14.416432] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:30:40.758 [2024-11-19 21:20:14.416479] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:40.758 [2024-11-19 21:20:14.416502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.758 [2024-11-19 21:20:14.416518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:40.758 [2024-11-19 21:20:14.416532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.758 [2024-11-19 21:20:14.416545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:40.758 [2024-11-19 21:20:14.416559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.758 [2024-11-19 21:20:14.416571] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:40.758 [2024-11-19 21:20:14.416585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.758 [2024-11-19 21:20:14.416608] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.758 [2024-11-19 21:20:14.416623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.758 [2024-11-19 21:20:14.416635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:40.758 [2024-11-19 21:20:14.416669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.758 [2024-11-19 21:20:14.416707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:40.758 [2024-11-19 21:20:14.416850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.758 [2024-11-19 21:20:14.416872] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.758 [2024-11-19 21:20:14.416885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.758 [2024-11-19 21:20:14.416897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:40.758 [2024-11-19 21:20:14.416920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.758 [2024-11-19 21:20:14.416936] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:30:40.758 [2024-11-19 21:20:14.416948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:40.758 [2024-11-19 21:20:14.416980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.758 [2024-11-19 21:20:14.417037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:40.758 [2024-11-19 21:20:14.417267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.758 [2024-11-19 21:20:14.417290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.758 [2024-11-19 21:20:14.417302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.758 [2024-11-19 21:20:14.417313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:40.758 [2024-11-19 21:20:14.417334] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:40.758 [2024-11-19 21:20:14.417351] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:40.758 [2024-11-19 21:20:14.417379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.758 [2024-11-19 21:20:14.417395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.758 [2024-11-19 21:20:14.417424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:40.758 [2024-11-19 21:20:14.417443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.758 [2024-11-19 21:20:14.417475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:40.758 [2024-11-19 21:20:14.417618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.758 [2024-11-19 21:20:14.417640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.758 [2024-11-19 21:20:14.417652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.758 [2024-11-19 21:20:14.417663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:40.758 [2024-11-19 21:20:14.417693] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.758 [2024-11-19 21:20:14.417709] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.758 [2024-11-19 21:20:14.417720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:40.758 [2024-11-19 21:20:14.417739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.758 [2024-11-19 21:20:14.417770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:40.758 [2024-11-19 21:20:14.417888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.758 [2024-11-19 21:20:14.417909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.758 [2024-11-19 21:20:14.417921] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.758 [2024-11-19 21:20:14.417932] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:40.758 [2024-11-19 
21:20:14.417960] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.758 [2024-11-19 21:20:14.417976] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.758 [2024-11-19 21:20:14.417987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:40.759 [2024-11-19 21:20:14.418006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.759 [2024-11-19 21:20:14.418037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:40.759 [2024-11-19 21:20:14.422103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.759 [2024-11-19 21:20:14.422133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.759 [2024-11-19 21:20:14.422146] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.759 [2024-11-19 21:20:14.422157] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:40.759 [2024-11-19 21:20:14.422191] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:40.759 [2024-11-19 21:20:14.422208] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:40.759 [2024-11-19 21:20:14.422219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:40.759 [2024-11-19 21:20:14.422238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.759 [2024-11-19 21:20:14.422271] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:40.759 [2024-11-19 21:20:14.422403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:40.759 [2024-11-19 21:20:14.422425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:40.759 [2024-11-19 21:20:14.422437] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:40.759 [2024-11-19 21:20:14.422448] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:40.759 [2024-11-19 21:20:14.422470] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:30:40.759 00:30:40.759 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:40.759 [2024-11-19 21:20:14.534158] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
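The run that starts here drives spdk_nvme_identify with an -r transport-ID string and -L all debug logging against nqn.2016-06.io.spdk:cnode1. As a rough illustration of what that invocation does before the identify dump below is printed, the sketch that follows parses the same transport-ID string and connects to the target through SPDK's public NVMe host API (spdk_nvme_transport_id_parse(), spdk_nvme_connect(), spdk_nvme_ctrlr_get_data()). It is a minimal standalone sketch under those assumptions, not the identify tool's actual code; the application name "identify_sketch" is made up for the example.

    /* Minimal sketch (not spdk_nvme_identify itself): parse the -r transport-ID
     * string from the log, connect to the NVMe-oF/TCP target, and print a few
     * identify-controller fields. Assumes SPDK's public headers and libraries. */
    #include <stdio.h>
    #include <string.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";          /* hypothetical app name */
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Same transport-ID string the harness passes via -r above. */
        memset(&trid, 0, sizeof(trid));
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        ctrlr = spdk_nvme_connect(&trid, NULL, 0);  /* default controller opts */
        if (ctrlr == NULL) {
            return 1;
        }

        /* Fields that feed the "Serial Number / Model Number / Firmware Version"
         * lines of the identify dump further down in this log. */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("SN: %.20s  MN: %.40s  FR: %.8s\n",
               (const char *)cdata->sn, (const char *)cdata->mn,
               (const char *)cdata->fr);

        spdk_nvme_detach(ctrlr);
        return 0;
    }

Exact build flags depend on how SPDK was configured in this workspace, so they are not shown here.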
00:30:40.759 [2024-11-19 21:20:14.534272] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3101509 ] 00:30:41.019 [2024-11-19 21:20:14.616727] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:30:41.019 [2024-11-19 21:20:14.616847] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:41.019 [2024-11-19 21:20:14.616869] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:41.019 [2024-11-19 21:20:14.616904] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:41.019 [2024-11-19 21:20:14.616928] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:41.019 [2024-11-19 21:20:14.617774] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:30:41.019 [2024-11-19 21:20:14.617849] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:41.019 [2024-11-19 21:20:14.628091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:41.019 [2024-11-19 21:20:14.628129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:41.019 [2024-11-19 21:20:14.628146] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:41.019 [2024-11-19 21:20:14.628159] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:41.019 [2024-11-19 21:20:14.628234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.019 [2024-11-19 21:20:14.628256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.019 [2024-11-19 21:20:14.628277] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:41.019 [2024-11-19 21:20:14.628308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:41.019 [2024-11-19 21:20:14.628350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:41.019 [2024-11-19 21:20:14.636092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.019 [2024-11-19 21:20:14.636130] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.019 [2024-11-19 21:20:14.636145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.019 [2024-11-19 21:20:14.636160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:41.019 [2024-11-19 21:20:14.636205] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:41.019 [2024-11-19 21:20:14.636233] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:30:41.019 [2024-11-19 21:20:14.636255] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:30:41.019 [2024-11-19 21:20:14.636284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.019 [2024-11-19 21:20:14.636300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.019 [2024-11-19 
21:20:14.636318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:41.019 [2024-11-19 21:20:14.636341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.019 [2024-11-19 21:20:14.636410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:41.019 [2024-11-19 21:20:14.636606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.020 [2024-11-19 21:20:14.636634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.020 [2024-11-19 21:20:14.636650] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.636686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:41.020 [2024-11-19 21:20:14.636709] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:30:41.020 [2024-11-19 21:20:14.636751] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:30:41.020 [2024-11-19 21:20:14.636782] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.636811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.636823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:41.020 [2024-11-19 21:20:14.636847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.020 [2024-11-19 21:20:14.636881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:41.020 [2024-11-19 21:20:14.637040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.020 [2024-11-19 21:20:14.637063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.020 [2024-11-19 21:20:14.637088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.637101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:41.020 [2024-11-19 21:20:14.637117] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:30:41.020 [2024-11-19 21:20:14.637144] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:41.020 [2024-11-19 21:20:14.637168] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.637182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.637194] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:41.020 [2024-11-19 21:20:14.637235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.020 [2024-11-19 21:20:14.637269] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:41.020 [2024-11-19 21:20:14.637420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.020 [2024-11-19 21:20:14.637461] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.020 [2024-11-19 21:20:14.637477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.637488] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:41.020 [2024-11-19 21:20:14.637504] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:41.020 [2024-11-19 21:20:14.637541] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.637560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.637586] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:41.020 [2024-11-19 21:20:14.637607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.020 [2024-11-19 21:20:14.637654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:41.020 [2024-11-19 21:20:14.637838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.020 [2024-11-19 21:20:14.637861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.020 [2024-11-19 21:20:14.637877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.637891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:41.020 [2024-11-19 21:20:14.637906] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:41.020 [2024-11-19 21:20:14.637921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:41.020 [2024-11-19 21:20:14.637946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:41.020 [2024-11-19 21:20:14.638095] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:30:41.020 [2024-11-19 21:20:14.638113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:41.020 [2024-11-19 21:20:14.638155] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.638171] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.638183] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:41.020 [2024-11-19 21:20:14.638202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.020 [2024-11-19 21:20:14.638235] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:41.020 [2024-11-19 21:20:14.638405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.020 [2024-11-19 21:20:14.638429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.020 [2024-11-19 21:20:14.638442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.020 
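The nvme_ctrlr state transitions logged around this point (CC.EN = 0 && CSTS.RDY = 0, "Setting CC.EN = 1", then "wait for CSTS.RDY = 1" and "controller is ready" just below) are the standard NVMe controller-enable handshake, carried here over Fabrics Property Get/Set rather than MMIO. The sketch below only illustrates that generic handshake; read_reg()/write_reg() and the register file behind them are fakes invented for the example so it runs standalone, and are not SPDK functions.

    /* Illustrative sketch of the CC.EN / CSTS.RDY enable handshake traced in the
     * debug log. The accessors and the "controller" behind them are fakes; the
     * real driver issues FABRIC PROPERTY GET/SET commands, as the log shows. */
    #include <stdio.h>
    #include <stdint.h>

    #define NVME_REG_CC    0x14u   /* Controller Configuration */
    #define NVME_REG_CSTS  0x1cu   /* Controller Status */
    #define NVME_CC_EN     0x1u    /* CC.EN   (bit 0) */
    #define NVME_CSTS_RDY  0x1u    /* CSTS.RDY (bit 0) */

    static uint32_t fake_regs[0x40];                 /* stand-in register file */

    static uint32_t read_reg(uint32_t off) { return fake_regs[off / 4]; }

    static void write_reg(uint32_t off, uint32_t val)
    {
        fake_regs[off / 4] = val;
        if (off == NVME_REG_CC) {                    /* fake controller: RDY follows EN */
            fake_regs[NVME_REG_CSTS / 4] = (val & NVME_CC_EN) ? NVME_CSTS_RDY : 0;
        }
    }

    int main(void)
    {
        /* "disable and wait for CSTS.RDY = 0" */
        write_reg(NVME_REG_CC, read_reg(NVME_REG_CC) & ~NVME_CC_EN);
        while (read_reg(NVME_REG_CSTS) & NVME_CSTS_RDY) { }

        /* "Setting CC.EN = 1" */
        write_reg(NVME_REG_CC, read_reg(NVME_REG_CC) | NVME_CC_EN);

        /* "wait for CSTS.RDY = 1" -> controller is ready */
        while (!(read_reg(NVME_REG_CSTS) & NVME_CSTS_RDY)) { }
        printf("CC.EN = 1 && CSTS.RDY = 1 - controller is ready\n");
        return 0;
    }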
[2024-11-19 21:20:14.638453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:41.020 [2024-11-19 21:20:14.638469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:41.020 [2024-11-19 21:20:14.638504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.638521] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.638555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:41.020 [2024-11-19 21:20:14.638575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.020 [2024-11-19 21:20:14.638623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:41.020 [2024-11-19 21:20:14.638814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.020 [2024-11-19 21:20:14.638838] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.020 [2024-11-19 21:20:14.638859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.638875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:41.020 [2024-11-19 21:20:14.638890] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:41.020 [2024-11-19 21:20:14.638905] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:41.020 [2024-11-19 21:20:14.638931] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:30:41.020 [2024-11-19 21:20:14.638959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:41.020 [2024-11-19 21:20:14.639009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.639025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:41.020 [2024-11-19 21:20:14.639075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.020 [2024-11-19 21:20:14.639124] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:41.020 [2024-11-19 21:20:14.639391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:41.020 [2024-11-19 21:20:14.639415] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:41.020 [2024-11-19 21:20:14.639430] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.639456] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:41.020 [2024-11-19 21:20:14.639491] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:41.020 [2024-11-19 21:20:14.639503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:30:41.020 [2024-11-19 21:20:14.639523] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.639545] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.639570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.020 [2024-11-19 21:20:14.639588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.020 [2024-11-19 21:20:14.639605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.639617] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:41.020 [2024-11-19 21:20:14.639641] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:30:41.020 [2024-11-19 21:20:14.639656] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:30:41.020 [2024-11-19 21:20:14.639684] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:30:41.020 [2024-11-19 21:20:14.639696] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:30:41.020 [2024-11-19 21:20:14.639708] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:30:41.020 [2024-11-19 21:20:14.639729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:30:41.020 [2024-11-19 21:20:14.639759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:41.020 [2024-11-19 21:20:14.639781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.639798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.639811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:41.020 [2024-11-19 21:20:14.639831] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:41.020 [2024-11-19 21:20:14.639863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:41.020 [2024-11-19 21:20:14.644096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.020 [2024-11-19 21:20:14.644126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.020 [2024-11-19 21:20:14.644140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.644151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:41.020 [2024-11-19 21:20:14.644176] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.020 [2024-11-19 21:20:14.644192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.644204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:41.021 [2024-11-19 21:20:14.644235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.021 [2024-11-19 21:20:14.644255] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.644267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.644278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:41.021 [2024-11-19 21:20:14.644294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.021 [2024-11-19 21:20:14.644314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.644332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.644343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:41.021 [2024-11-19 21:20:14.644360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.021 [2024-11-19 21:20:14.644376] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.644388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.644413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:41.021 [2024-11-19 21:20:14.644429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.021 [2024-11-19 21:20:14.644443] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:41.021 [2024-11-19 21:20:14.644483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:41.021 [2024-11-19 21:20:14.644505] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.644527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:41.021 [2024-11-19 21:20:14.644546] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.021 [2024-11-19 21:20:14.644611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:41.021 [2024-11-19 21:20:14.644631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:41.021 [2024-11-19 21:20:14.644644] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:41.021 [2024-11-19 21:20:14.644657] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:41.021 [2024-11-19 21:20:14.644674] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:41.021 [2024-11-19 21:20:14.644845] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.021 [2024-11-19 21:20:14.644869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.021 [2024-11-19 21:20:14.644882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.644908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:41.021 [2024-11-19 21:20:14.644924] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:30:41.021 [2024-11-19 21:20:14.644940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:41.021 [2024-11-19 21:20:14.644978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:30:41.021 [2024-11-19 21:20:14.645009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:41.021 [2024-11-19 21:20:14.645033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.645086] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.645103] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:41.021 [2024-11-19 21:20:14.645138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:41.021 [2024-11-19 21:20:14.645173] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:41.021 [2024-11-19 21:20:14.645342] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.021 [2024-11-19 21:20:14.645372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.021 [2024-11-19 21:20:14.645387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.645398] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:41.021 [2024-11-19 21:20:14.645526] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:30:41.021 [2024-11-19 21:20:14.645596] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:41.021 [2024-11-19 21:20:14.645625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.645639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:41.021 [2024-11-19 21:20:14.645659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.021 [2024-11-19 21:20:14.645697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:41.021 [2024-11-19 21:20:14.645947] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:41.021 [2024-11-19 21:20:14.645976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:41.021 [2024-11-19 21:20:14.645989] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.646002] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:41.021 [2024-11-19 21:20:14.646023] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:41.021 [2024-11-19 21:20:14.646037] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.646079] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.646097] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.646134] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.021 [2024-11-19 21:20:14.646160] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.021 [2024-11-19 21:20:14.646174] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.646185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:41.021 [2024-11-19 21:20:14.646232] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:30:41.021 [2024-11-19 21:20:14.646278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:30:41.021 [2024-11-19 21:20:14.646318] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:30:41.021 [2024-11-19 21:20:14.646347] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.646362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:41.021 [2024-11-19 21:20:14.646382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.021 [2024-11-19 21:20:14.646439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:41.021 [2024-11-19 21:20:14.646676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:41.021 [2024-11-19 21:20:14.646700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:41.021 [2024-11-19 21:20:14.646713] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.646733] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:41.021 [2024-11-19 21:20:14.646749] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:41.021 [2024-11-19 21:20:14.646776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.646794] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.646808] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.646826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.021 [2024-11-19 21:20:14.646843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.021 [2024-11-19 21:20:14.646859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.646872] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:41.021 [2024-11-19 21:20:14.646915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:41.021 [2024-11-19 21:20:14.646961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:41.021 [2024-11-19 21:20:14.646988] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.647008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:41.021 [2024-11-19 21:20:14.647028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.021 [2024-11-19 21:20:14.647092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:41.021 [2024-11-19 21:20:14.647260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:41.021 [2024-11-19 21:20:14.647286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:41.021 [2024-11-19 21:20:14.647318] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.647333] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:41.021 [2024-11-19 21:20:14.647346] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:41.021 [2024-11-19 21:20:14.647362] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.647403] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.647418] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.647442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.021 [2024-11-19 21:20:14.647459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.021 [2024-11-19 21:20:14.647471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.021 [2024-11-19 21:20:14.647482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:41.021 [2024-11-19 21:20:14.647509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:41.022 [2024-11-19 21:20:14.647551] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:30:41.022 [2024-11-19 21:20:14.647575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:41.022 [2024-11-19 21:20:14.647593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:41.022 [2024-11-19 21:20:14.647606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:41.022 [2024-11-19 21:20:14.647620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:30:41.022 [2024-11-19 21:20:14.647647] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:41.022 [2024-11-19 21:20:14.647661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:41.022 [2024-11-19 21:20:14.647675] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:41.022 [2024-11-19 21:20:14.647734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.647754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:41.022 [2024-11-19 21:20:14.647774] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.022 [2024-11-19 21:20:14.647798] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.647813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.647823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:41.022 [2024-11-19 21:20:14.647840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.022 [2024-11-19 21:20:14.647872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:41.022 [2024-11-19 21:20:14.647905] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:41.022 [2024-11-19 21:20:14.652090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.022 [2024-11-19 21:20:14.652116] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.022 [2024-11-19 21:20:14.652129] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.652142] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:41.022 [2024-11-19 21:20:14.652169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.022 [2024-11-19 21:20:14.652186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.022 [2024-11-19 21:20:14.652198] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.652213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:41.022 [2024-11-19 21:20:14.652253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.652272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:41.022 [2024-11-19 21:20:14.652291] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.022 [2024-11-19 21:20:14.652326] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:41.022 [2024-11-19 21:20:14.652486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.022 [2024-11-19 21:20:14.652520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.022 [2024-11-19 21:20:14.652536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.652548] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:41.022 [2024-11-19 21:20:14.652576] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.652594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:41.022 [2024-11-19 21:20:14.652622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.022 [2024-11-19 21:20:14.652676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:41.022 [2024-11-19 21:20:14.652831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.022 [2024-11-19 21:20:14.652854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.022 [2024-11-19 21:20:14.652871] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.652884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:41.022 [2024-11-19 21:20:14.652911] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.652930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:41.022 [2024-11-19 21:20:14.652950] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.022 [2024-11-19 21:20:14.652982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:41.022 [2024-11-19 21:20:14.653132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.022 [2024-11-19 21:20:14.653157] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.022 [2024-11-19 21:20:14.653169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.653181] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:41.022 [2024-11-19 21:20:14.653229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.653249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:41.022 [2024-11-19 21:20:14.653270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.022 [2024-11-19 21:20:14.653294] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.653309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:41.022 [2024-11-19 21:20:14.653329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.022 [2024-11-19 21:20:14.653350] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.653365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000015700) 00:30:41.022 [2024-11-19 21:20:14.653393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.022 [2024-11-19 21:20:14.653443] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:30:41.022 [2024-11-19 21:20:14.653459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:41.022 [2024-11-19 21:20:14.653483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.022 [2024-11-19 21:20:14.653517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:41.022 [2024-11-19 21:20:14.653553] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:41.022 [2024-11-19 21:20:14.653566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:30:41.022 [2024-11-19 21:20:14.653579] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:41.022 [2024-11-19 21:20:14.653882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:41.022 [2024-11-19 21:20:14.653931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:41.022 [2024-11-19 21:20:14.653945] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.653969] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8192, cccid=5 00:30:41.022 [2024-11-19 21:20:14.653987] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000015700): expected_datao=0, payload_size=8192 00:30:41.022 [2024-11-19 21:20:14.654007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.654042] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.654064] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.654089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:41.022 [2024-11-19 21:20:14.654106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:41.022 [2024-11-19 21:20:14.654118] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.654129] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=4 00:30:41.022 [2024-11-19 21:20:14.654141] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:41.022 [2024-11-19 21:20:14.654153] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.654180] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.654195] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.654224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:41.022 [2024-11-19 21:20:14.654240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:41.022 [2024-11-19 21:20:14.654251] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.654261] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=6 00:30:41.022 [2024-11-19 21:20:14.654273] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:41.022 
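The four GET LOG PAGE commands dispatched above fetch the Error Information (0x01), SMART / Health (0x02), Firmware Slot (0x03) and Commands Supported and Effects (0x05) log pages. In CDW10, bits 7:0 carry the log page identifier and bits 31:16 the 0-based dword count (NUMDL), which is why the c2h_data payload sizes in the surrounding records are 8192, 512, 512 and 4096 bytes for cccid 5, 4, 6 and 7. A small standalone decoder as a sanity check (plain C, no SPDK dependencies):

    /* Decode the Get Log Page CDW10 values seen in this log.
     * NVMe spec: bits 7:0 = Log Page Identifier, bits 31:16 = NUMDL (0-based),
     * so the transfer length is (NUMDL + 1) dwords. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint32_t cdw10s[] = { 0x07ff0001u, 0x007f0002u, 0x007f0003u, 0x03ff0005u };

        for (unsigned i = 0; i < sizeof(cdw10s) / sizeof(cdw10s[0]); i++) {
            uint32_t cdw10 = cdw10s[i];
            uint32_t lid   = cdw10 & 0xffu;      /* log page identifier */
            uint32_t numdl = cdw10 >> 16;        /* number of dwords, lower, 0-based */
            uint32_t bytes = (numdl + 1) * 4;    /* transfer size in bytes */
            printf("cdw10=0x%08x -> LID 0x%02x, %u bytes\n", cdw10, lid, bytes);
        }
        return 0;
    }

The 8192-byte error log transfer is consistent with the "Error Log Page Entries Supported: 128" line in the identify dump that follows (128 entries of 64 bytes each).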
[2024-11-19 21:20:14.654284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.654304] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.654318] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.654332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:41.022 [2024-11-19 21:20:14.654351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:41.022 [2024-11-19 21:20:14.654364] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.654375] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=7 00:30:41.022 [2024-11-19 21:20:14.654405] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:41.022 [2024-11-19 21:20:14.654417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.654433] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:41.022 [2024-11-19 21:20:14.654446] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:41.023 [2024-11-19 21:20:14.654464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.023 [2024-11-19 21:20:14.654480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.023 [2024-11-19 21:20:14.654491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.023 [2024-11-19 21:20:14.654514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:41.023 [2024-11-19 21:20:14.654551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.023 [2024-11-19 21:20:14.654569] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.023 [2024-11-19 21:20:14.654580] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.023 [2024-11-19 21:20:14.654591] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:41.023 [2024-11-19 21:20:14.654617] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.023 [2024-11-19 21:20:14.654635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.023 [2024-11-19 21:20:14.654647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.023 [2024-11-19 21:20:14.654657] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000015700 00:30:41.023 [2024-11-19 21:20:14.654676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.023 [2024-11-19 21:20:14.654696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.023 [2024-11-19 21:20:14.654709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.023 [2024-11-19 21:20:14.654720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:41.023 ===================================================== 00:30:41.023 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:41.023 ===================================================== 00:30:41.023 Controller Capabilities/Features 00:30:41.023 ================================ 00:30:41.023 Vendor ID: 8086 00:30:41.023 Subsystem Vendor ID: 8086 
00:30:41.023 Serial Number: SPDK00000000000001 00:30:41.023 Model Number: SPDK bdev Controller 00:30:41.023 Firmware Version: 25.01 00:30:41.023 Recommended Arb Burst: 6 00:30:41.023 IEEE OUI Identifier: e4 d2 5c 00:30:41.023 Multi-path I/O 00:30:41.023 May have multiple subsystem ports: Yes 00:30:41.023 May have multiple controllers: Yes 00:30:41.023 Associated with SR-IOV VF: No 00:30:41.023 Max Data Transfer Size: 131072 00:30:41.023 Max Number of Namespaces: 32 00:30:41.023 Max Number of I/O Queues: 127 00:30:41.023 NVMe Specification Version (VS): 1.3 00:30:41.023 NVMe Specification Version (Identify): 1.3 00:30:41.023 Maximum Queue Entries: 128 00:30:41.023 Contiguous Queues Required: Yes 00:30:41.023 Arbitration Mechanisms Supported 00:30:41.023 Weighted Round Robin: Not Supported 00:30:41.023 Vendor Specific: Not Supported 00:30:41.023 Reset Timeout: 15000 ms 00:30:41.023 Doorbell Stride: 4 bytes 00:30:41.023 NVM Subsystem Reset: Not Supported 00:30:41.023 Command Sets Supported 00:30:41.023 NVM Command Set: Supported 00:30:41.023 Boot Partition: Not Supported 00:30:41.023 Memory Page Size Minimum: 4096 bytes 00:30:41.023 Memory Page Size Maximum: 4096 bytes 00:30:41.023 Persistent Memory Region: Not Supported 00:30:41.023 Optional Asynchronous Events Supported 00:30:41.023 Namespace Attribute Notices: Supported 00:30:41.023 Firmware Activation Notices: Not Supported 00:30:41.023 ANA Change Notices: Not Supported 00:30:41.023 PLE Aggregate Log Change Notices: Not Supported 00:30:41.023 LBA Status Info Alert Notices: Not Supported 00:30:41.023 EGE Aggregate Log Change Notices: Not Supported 00:30:41.023 Normal NVM Subsystem Shutdown event: Not Supported 00:30:41.023 Zone Descriptor Change Notices: Not Supported 00:30:41.023 Discovery Log Change Notices: Not Supported 00:30:41.023 Controller Attributes 00:30:41.023 128-bit Host Identifier: Supported 00:30:41.023 Non-Operational Permissive Mode: Not Supported 00:30:41.023 NVM Sets: Not Supported 00:30:41.023 Read Recovery Levels: Not Supported 00:30:41.023 Endurance Groups: Not Supported 00:30:41.023 Predictable Latency Mode: Not Supported 00:30:41.023 Traffic Based Keep ALive: Not Supported 00:30:41.023 Namespace Granularity: Not Supported 00:30:41.023 SQ Associations: Not Supported 00:30:41.023 UUID List: Not Supported 00:30:41.023 Multi-Domain Subsystem: Not Supported 00:30:41.023 Fixed Capacity Management: Not Supported 00:30:41.023 Variable Capacity Management: Not Supported 00:30:41.023 Delete Endurance Group: Not Supported 00:30:41.023 Delete NVM Set: Not Supported 00:30:41.023 Extended LBA Formats Supported: Not Supported 00:30:41.023 Flexible Data Placement Supported: Not Supported 00:30:41.023 00:30:41.023 Controller Memory Buffer Support 00:30:41.023 ================================ 00:30:41.023 Supported: No 00:30:41.023 00:30:41.023 Persistent Memory Region Support 00:30:41.023 ================================ 00:30:41.023 Supported: No 00:30:41.023 00:30:41.023 Admin Command Set Attributes 00:30:41.023 ============================ 00:30:41.023 Security Send/Receive: Not Supported 00:30:41.023 Format NVM: Not Supported 00:30:41.023 Firmware Activate/Download: Not Supported 00:30:41.023 Namespace Management: Not Supported 00:30:41.023 Device Self-Test: Not Supported 00:30:41.023 Directives: Not Supported 00:30:41.023 NVMe-MI: Not Supported 00:30:41.023 Virtualization Management: Not Supported 00:30:41.023 Doorbell Buffer Config: Not Supported 00:30:41.023 Get LBA Status Capability: Not Supported 00:30:41.023 Command & 
Feature Lockdown Capability: Not Supported 00:30:41.023 Abort Command Limit: 4 00:30:41.023 Async Event Request Limit: 4 00:30:41.023 Number of Firmware Slots: N/A 00:30:41.023 Firmware Slot 1 Read-Only: N/A 00:30:41.023 Firmware Activation Without Reset: N/A 00:30:41.023 Multiple Update Detection Support: N/A 00:30:41.023 Firmware Update Granularity: No Information Provided 00:30:41.023 Per-Namespace SMART Log: No 00:30:41.023 Asymmetric Namespace Access Log Page: Not Supported 00:30:41.023 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:41.023 Command Effects Log Page: Supported 00:30:41.023 Get Log Page Extended Data: Supported 00:30:41.023 Telemetry Log Pages: Not Supported 00:30:41.023 Persistent Event Log Pages: Not Supported 00:30:41.023 Supported Log Pages Log Page: May Support 00:30:41.023 Commands Supported & Effects Log Page: Not Supported 00:30:41.023 Feature Identifiers & Effects Log Page:May Support 00:30:41.023 NVMe-MI Commands & Effects Log Page: May Support 00:30:41.023 Data Area 4 for Telemetry Log: Not Supported 00:30:41.023 Error Log Page Entries Supported: 128 00:30:41.023 Keep Alive: Supported 00:30:41.023 Keep Alive Granularity: 10000 ms 00:30:41.023 00:30:41.023 NVM Command Set Attributes 00:30:41.023 ========================== 00:30:41.023 Submission Queue Entry Size 00:30:41.023 Max: 64 00:30:41.023 Min: 64 00:30:41.023 Completion Queue Entry Size 00:30:41.023 Max: 16 00:30:41.023 Min: 16 00:30:41.023 Number of Namespaces: 32 00:30:41.023 Compare Command: Supported 00:30:41.023 Write Uncorrectable Command: Not Supported 00:30:41.023 Dataset Management Command: Supported 00:30:41.023 Write Zeroes Command: Supported 00:30:41.023 Set Features Save Field: Not Supported 00:30:41.023 Reservations: Supported 00:30:41.023 Timestamp: Not Supported 00:30:41.023 Copy: Supported 00:30:41.023 Volatile Write Cache: Present 00:30:41.023 Atomic Write Unit (Normal): 1 00:30:41.023 Atomic Write Unit (PFail): 1 00:30:41.023 Atomic Compare & Write Unit: 1 00:30:41.023 Fused Compare & Write: Supported 00:30:41.023 Scatter-Gather List 00:30:41.023 SGL Command Set: Supported 00:30:41.023 SGL Keyed: Supported 00:30:41.023 SGL Bit Bucket Descriptor: Not Supported 00:30:41.023 SGL Metadata Pointer: Not Supported 00:30:41.023 Oversized SGL: Not Supported 00:30:41.023 SGL Metadata Address: Not Supported 00:30:41.023 SGL Offset: Supported 00:30:41.023 Transport SGL Data Block: Not Supported 00:30:41.023 Replay Protected Memory Block: Not Supported 00:30:41.023 00:30:41.023 Firmware Slot Information 00:30:41.023 ========================= 00:30:41.023 Active slot: 1 00:30:41.023 Slot 1 Firmware Revision: 25.01 00:30:41.023 00:30:41.023 00:30:41.023 Commands Supported and Effects 00:30:41.023 ============================== 00:30:41.023 Admin Commands 00:30:41.023 -------------- 00:30:41.023 Get Log Page (02h): Supported 00:30:41.023 Identify (06h): Supported 00:30:41.023 Abort (08h): Supported 00:30:41.023 Set Features (09h): Supported 00:30:41.023 Get Features (0Ah): Supported 00:30:41.023 Asynchronous Event Request (0Ch): Supported 00:30:41.023 Keep Alive (18h): Supported 00:30:41.023 I/O Commands 00:30:41.023 ------------ 00:30:41.023 Flush (00h): Supported LBA-Change 00:30:41.023 Write (01h): Supported LBA-Change 00:30:41.024 Read (02h): Supported 00:30:41.024 Compare (05h): Supported 00:30:41.024 Write Zeroes (08h): Supported LBA-Change 00:30:41.024 Dataset Management (09h): Supported LBA-Change 00:30:41.024 Copy (19h): Supported LBA-Change 00:30:41.024 00:30:41.024 Error Log 00:30:41.024 
========= 00:30:41.024 00:30:41.024 Arbitration 00:30:41.024 =========== 00:30:41.024 Arbitration Burst: 1 00:30:41.024 00:30:41.024 Power Management 00:30:41.024 ================ 00:30:41.024 Number of Power States: 1 00:30:41.024 Current Power State: Power State #0 00:30:41.024 Power State #0: 00:30:41.024 Max Power: 0.00 W 00:30:41.024 Non-Operational State: Operational 00:30:41.024 Entry Latency: Not Reported 00:30:41.024 Exit Latency: Not Reported 00:30:41.024 Relative Read Throughput: 0 00:30:41.024 Relative Read Latency: 0 00:30:41.024 Relative Write Throughput: 0 00:30:41.024 Relative Write Latency: 0 00:30:41.024 Idle Power: Not Reported 00:30:41.024 Active Power: Not Reported 00:30:41.024 Non-Operational Permissive Mode: Not Supported 00:30:41.024 00:30:41.024 Health Information 00:30:41.024 ================== 00:30:41.024 Critical Warnings: 00:30:41.024 Available Spare Space: OK 00:30:41.024 Temperature: OK 00:30:41.024 Device Reliability: OK 00:30:41.024 Read Only: No 00:30:41.024 Volatile Memory Backup: OK 00:30:41.024 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:41.024 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:41.024 Available Spare: 0% 00:30:41.024 Available Spare Threshold: 0% 00:30:41.024 Life Percentage Used:[2024-11-19 21:20:14.654922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.024 [2024-11-19 21:20:14.654941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:41.024 [2024-11-19 21:20:14.654961] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.024 [2024-11-19 21:20:14.654994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:41.024 [2024-11-19 21:20:14.655182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.024 [2024-11-19 21:20:14.655207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.024 [2024-11-19 21:20:14.655228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.024 [2024-11-19 21:20:14.655243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:41.024 [2024-11-19 21:20:14.655322] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:41.024 [2024-11-19 21:20:14.655355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:41.024 [2024-11-19 21:20:14.655398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:41.024 [2024-11-19 21:20:14.655414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:41.024 [2024-11-19 21:20:14.655428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:41.024 [2024-11-19 21:20:14.655441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:41.024 [2024-11-19 21:20:14.655454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:41.024 [2024-11-19 21:20:14.655470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 
00:30:41.024 [2024-11-19 21:20:14.655485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:41.024 [2024-11-19 21:20:14.655504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.024 [2024-11-19 21:20:14.655518] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.024 [2024-11-19 21:20:14.655534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:41.024 [2024-11-19 21:20:14.655555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.024 [2024-11-19 21:20:14.655593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:41.024 [2024-11-19 21:20:14.655816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.024 [2024-11-19 21:20:14.655840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.024 [2024-11-19 21:20:14.655854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.024 [2024-11-19 21:20:14.655866] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:41.024 [2024-11-19 21:20:14.655899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.024 [2024-11-19 21:20:14.655915] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.024 [2024-11-19 21:20:14.655926] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:41.024 [2024-11-19 21:20:14.655946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.024 [2024-11-19 21:20:14.656004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:41.024 [2024-11-19 21:20:14.660090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.024 [2024-11-19 21:20:14.660115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.024 [2024-11-19 21:20:14.660128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.024 [2024-11-19 21:20:14.660140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:41.024 [2024-11-19 21:20:14.660155] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:41.024 [2024-11-19 21:20:14.660169] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:30:41.024 [2024-11-19 21:20:14.660199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:41.024 [2024-11-19 21:20:14.660221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:41.024 [2024-11-19 21:20:14.660240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:41.024 [2024-11-19 21:20:14.660264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.024 [2024-11-19 21:20:14.660316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:41.024 [2024-11-19 21:20:14.660449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:41.024 [2024-11-19 
21:20:14.660472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:41.024 [2024-11-19 21:20:14.660489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:41.024 [2024-11-19 21:20:14.660507] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:41.024 [2024-11-19 21:20:14.660534] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 0 milliseconds 00:30:41.024 0% 00:30:41.024 Data Units Read: 0 00:30:41.024 Data Units Written: 0 00:30:41.024 Host Read Commands: 0 00:30:41.024 Host Write Commands: 0 00:30:41.024 Controller Busy Time: 0 minutes 00:30:41.024 Power Cycles: 0 00:30:41.024 Power On Hours: 0 hours 00:30:41.024 Unsafe Shutdowns: 0 00:30:41.024 Unrecoverable Media Errors: 0 00:30:41.024 Lifetime Error Log Entries: 0 00:30:41.024 Warning Temperature Time: 0 minutes 00:30:41.024 Critical Temperature Time: 0 minutes 00:30:41.024 00:30:41.024 Number of Queues 00:30:41.024 ================ 00:30:41.024 Number of I/O Submission Queues: 127 00:30:41.024 Number of I/O Completion Queues: 127 00:30:41.024 00:30:41.024 Active Namespaces 00:30:41.024 ================= 00:30:41.024 Namespace ID:1 00:30:41.024 Error Recovery Timeout: Unlimited 00:30:41.024 Command Set Identifier: NVM (00h) 00:30:41.024 Deallocate: Supported 00:30:41.024 Deallocated/Unwritten Error: Not Supported 00:30:41.024 Deallocated Read Value: Unknown 00:30:41.024 Deallocate in Write Zeroes: Not Supported 00:30:41.024 Deallocated Guard Field: 0xFFFF 00:30:41.024 Flush: Supported 00:30:41.024 Reservation: Supported 00:30:41.024 Namespace Sharing Capabilities: Multiple Controllers 00:30:41.025 Size (in LBAs): 131072 (0GiB) 00:30:41.025 Capacity (in LBAs): 131072 (0GiB) 00:30:41.025 Utilization (in LBAs): 131072 (0GiB) 00:30:41.025 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:41.025 EUI64: ABCDEF0123456789 00:30:41.025 UUID: 5554e2b9-c06f-4d8d-8718-080aec67014d 00:30:41.025 Thin Provisioning: Not Supported 00:30:41.025 Per-NS Atomic Units: Yes 00:30:41.025 Atomic Boundary Size (Normal): 0 00:30:41.025 Atomic Boundary Size (PFail): 0 00:30:41.025 Atomic Boundary Offset: 0 00:30:41.025 Maximum Single Source Range Length: 65535 00:30:41.025 Maximum Copy Length: 65535 00:30:41.025 Maximum Source Range Count: 1 00:30:41.025 NGUID/EUI64 Never Reused: No 00:30:41.025 Namespace Write Protected: No 00:30:41.025 Number of LBA Formats: 1 00:30:41.025 Current LBA Format: LBA Format #00 00:30:41.025 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:41.025 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:41.025 
21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:41.025 rmmod nvme_tcp 00:30:41.025 rmmod nvme_fabrics 00:30:41.025 rmmod nvme_keyring 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3101340 ']' 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3101340 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3101340 ']' 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3101340 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:41.025 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3101340 00:30:41.282 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:41.282 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:41.282 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3101340' 00:30:41.282 killing process with pid 3101340 00:30:41.282 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3101340 00:30:41.282 21:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3101340 00:30:42.654 21:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:42.654 21:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:42.654 21:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:42.654 21:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:42.654 21:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:30:42.654 21:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:42.654 21:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:30:42.654 21:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:42.654 21:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:42.654 21:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.654 21:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.654 21:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:44.556 
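The nvmf_identify teardown traced above reduces to a short command sequence. The sketch below condenses it using this run's own values (subsystem NQN nqn.2016-06.io.spdk:cnode1, target PID 3101340, netns cvl_0_0_ns_spdk, interface cvl_0_1); rpc_cmd, killprocess, iptr and _remove_spdk_ns are autotest helpers, so the plain commands shown are an approximation of what those helpers run, not their literal bodies.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
modprobe -v -r nvme-tcp                                  # unload host-side transport (also pulls out nvme_fabrics/nvme_keyring deps)
modprobe -v -r nvme-fabrics
kill 3101340                                             # killprocess: stop the nvmf_tgt reactor process
iptables-save | grep -v SPDK_NVMF | iptables-restore     # iptr: strip the SPDK-tagged ACCEPT rule added at init
ip netns delete cvl_0_0_ns_spdk                          # approximate equivalent of the traced _remove_spdk_ns helper
ip -4 addr flush cvl_0_1                                 # clear the initiator-side address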
00:30:44.556 real 0m7.627s 00:30:44.556 user 0m11.213s 00:30:44.556 sys 0m2.290s 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:44.556 ************************************ 00:30:44.556 END TEST nvmf_identify 00:30:44.556 ************************************ 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.556 ************************************ 00:30:44.556 START TEST nvmf_perf 00:30:44.556 ************************************ 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:44.556 * Looking for test storage... 00:30:44.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:44.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.556 --rc genhtml_branch_coverage=1 00:30:44.556 --rc genhtml_function_coverage=1 00:30:44.556 --rc genhtml_legend=1 00:30:44.556 --rc geninfo_all_blocks=1 00:30:44.556 --rc geninfo_unexecuted_blocks=1 00:30:44.556 00:30:44.556 ' 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:44.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.556 --rc genhtml_branch_coverage=1 00:30:44.556 --rc genhtml_function_coverage=1 00:30:44.556 --rc genhtml_legend=1 00:30:44.556 --rc geninfo_all_blocks=1 00:30:44.556 --rc geninfo_unexecuted_blocks=1 00:30:44.556 00:30:44.556 ' 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:44.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.556 --rc genhtml_branch_coverage=1 00:30:44.556 --rc genhtml_function_coverage=1 00:30:44.556 --rc genhtml_legend=1 00:30:44.556 --rc geninfo_all_blocks=1 00:30:44.556 --rc geninfo_unexecuted_blocks=1 00:30:44.556 00:30:44.556 ' 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:44.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.556 --rc genhtml_branch_coverage=1 00:30:44.556 --rc genhtml_function_coverage=1 00:30:44.556 --rc genhtml_legend=1 00:30:44.556 --rc geninfo_all_blocks=1 00:30:44.556 --rc geninfo_unexecuted_blocks=1 00:30:44.556 00:30:44.556 ' 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.556 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.814 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.814 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.814 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:44.814 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:44.814 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.814 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:44.814 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:44.814 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.814 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:44.814 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:44.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.815 21:20:18 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:44.815 21:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:46.714 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:46.715 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:46.715 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:46.715 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:46.715 21:20:20 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:46.715 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:46.715 21:20:20 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:46.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:46.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:30:46.715 00:30:46.715 --- 10.0.0.2 ping statistics --- 00:30:46.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.715 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:46.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:46.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:30:46.715 00:30:46.715 --- 10.0.0.1 ping statistics --- 00:30:46.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.715 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3103682 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3103682 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3103682 ']' 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:30:46.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:46.715 21:20:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:46.973 [2024-11-19 21:20:20.593508] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:30:46.974 [2024-11-19 21:20:20.593658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:46.974 [2024-11-19 21:20:20.740315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:47.231 [2024-11-19 21:20:20.867177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:47.231 [2024-11-19 21:20:20.867263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:47.231 [2024-11-19 21:20:20.867286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:47.231 [2024-11-19 21:20:20.867306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:47.231 [2024-11-19 21:20:20.867323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:47.231 [2024-11-19 21:20:20.869977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:47.231 [2024-11-19 21:20:20.870027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:47.231 [2024-11-19 21:20:20.870077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.231 [2024-11-19 21:20:20.870086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:47.796 21:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:47.796 21:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:47.796 21:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:47.796 21:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:47.796 21:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:48.054 21:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:48.054 21:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:48.054 21:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:51.331 21:20:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:51.331 21:20:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:51.331 21:20:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:51.331 21:20:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:51.895 21:20:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
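At this point perf.sh has the target process up and its bdevs registered; a minimal sketch of that bring-up follows, using this run's values (the SPDK checkout path, netns cvl_0_0_ns_spdk, core mask 0xF, and the 64 MiB / 512 B Malloc bdev). The $SPDK shorthand is only for brevity, and waitforlisten is the autotest helper that polls the /var/tmp/spdk.sock RPC socket, shown here by name rather than expanded.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
waitforlisten $nvmfpid                                   # wait until the RPC socket is listening
$SPDK/scripts/gen_nvme.sh | $SPDK/scripts/rpc.py load_subsystem_config    # attach the local NVMe (0000:88:00.0) as Nvme0n1, piped as perf.sh does
local_nvme_trid=$($SPDK/scripts/rpc.py framework_get_config bdev | jq -r '.[].params | select(.name=="Nvme0").traddr')
$SPDK/scripts/rpc.py bdev_malloc_create 64 512           # 64 MiB, 512 B blocks -> Malloc0
The transport, subsystem, namespaces and the 10.0.0.2:4420 listener are then wired up over the same rpc.py socket, as the trace that follows shows.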
00:30:51.895 21:20:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:51.896 21:20:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:51.896 21:20:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:51.896 21:20:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:52.153 [2024-11-19 21:20:25.750510] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.153 21:20:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:52.409 21:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:52.410 21:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:52.666 21:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:52.666 21:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:52.923 21:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:53.180 [2024-11-19 21:20:26.848751] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:53.180 21:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:53.438 21:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:53.438 21:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:53.438 21:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:53.438 21:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:54.812 Initializing NVMe Controllers 00:30:54.812 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:54.812 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:54.812 Initialization complete. Launching workers. 
00:30:54.812 ======================================================== 00:30:54.812 Latency(us) 00:30:54.812 Device Information : IOPS MiB/s Average min max 00:30:54.812 PCIE (0000:88:00.0) NSID 1 from core 0: 74223.60 289.94 430.42 44.43 4425.60 00:30:54.812 ======================================================== 00:30:54.812 Total : 74223.60 289.94 430.42 44.43 4425.60 00:30:54.812 00:30:55.070 21:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:56.443 Initializing NVMe Controllers 00:30:56.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:56.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:56.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:56.443 Initialization complete. Launching workers. 00:30:56.443 ======================================================== 00:30:56.443 Latency(us) 00:30:56.443 Device Information : IOPS MiB/s Average min max 00:30:56.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 70.00 0.27 14550.74 201.55 45803.51 00:30:56.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.00 0.18 21900.88 7918.42 50898.57 00:30:56.443 ======================================================== 00:30:56.443 Total : 116.00 0.45 17465.45 201.55 50898.57 00:30:56.443 00:30:56.443 21:20:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:57.818 Initializing NVMe Controllers 00:30:57.818 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:57.818 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:57.818 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:57.818 Initialization complete. Launching workers. 00:30:57.818 ======================================================== 00:30:57.818 Latency(us) 00:30:57.818 Device Information : IOPS MiB/s Average min max 00:30:57.818 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5266.99 20.57 6099.87 933.15 12506.36 00:30:57.818 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3848.00 15.03 8359.86 5901.88 16570.70 00:30:57.818 ======================================================== 00:30:57.818 Total : 9114.99 35.61 7053.95 933.15 16570.70 00:30:57.818 00:30:57.818 21:20:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:57.818 21:20:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:57.818 21:20:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:01.098 Initializing NVMe Controllers 00:31:01.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:01.098 Controller IO queue size 128, less than required. 00:31:01.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:01.098 Controller IO queue size 128, less than required. 00:31:01.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:01.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:01.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:01.098 Initialization complete. Launching workers. 00:31:01.098 ======================================================== 00:31:01.098 Latency(us) 00:31:01.098 Device Information : IOPS MiB/s Average min max 00:31:01.098 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1204.49 301.12 108137.19 56660.42 282911.15 00:31:01.098 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 547.49 136.87 257733.08 134566.57 514762.58 00:31:01.098 ======================================================== 00:31:01.098 Total : 1751.98 437.99 154885.91 56660.42 514762.58 00:31:01.098 00:31:01.098 21:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:01.098 No valid NVMe controllers or AIO or URING devices found 00:31:01.098 Initializing NVMe Controllers 00:31:01.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:01.098 Controller IO queue size 128, less than required. 00:31:01.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:01.098 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:01.098 Controller IO queue size 128, less than required. 00:31:01.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:01.098 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:31:01.098 WARNING: Some requested NVMe devices were skipped 00:31:01.098 21:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:04.378 Initializing NVMe Controllers 00:31:04.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:04.378 Controller IO queue size 128, less than required. 00:31:04.378 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:04.378 Controller IO queue size 128, less than required. 00:31:04.378 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:04.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:04.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:04.378 Initialization complete. Launching workers. 
00:31:04.378 00:31:04.378 ==================== 00:31:04.378 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:04.378 TCP transport: 00:31:04.378 polls: 5563 00:31:04.378 idle_polls: 3100 00:31:04.378 sock_completions: 2463 00:31:04.378 nvme_completions: 4903 00:31:04.378 submitted_requests: 7438 00:31:04.378 queued_requests: 1 00:31:04.378 00:31:04.378 ==================== 00:31:04.378 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:04.378 TCP transport: 00:31:04.378 polls: 8575 00:31:04.378 idle_polls: 6020 00:31:04.378 sock_completions: 2555 00:31:04.378 nvme_completions: 4941 00:31:04.378 submitted_requests: 7448 00:31:04.378 queued_requests: 1 00:31:04.378 ======================================================== 00:31:04.378 Latency(us) 00:31:04.378 Device Information : IOPS MiB/s Average min max 00:31:04.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1225.17 306.29 108911.61 61460.56 300211.94 00:31:04.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1234.66 308.67 109469.02 57428.23 419428.56 00:31:04.378 ======================================================== 00:31:04.378 Total : 2459.83 614.96 109191.40 57428.23 419428.56 00:31:04.378 00:31:04.378 21:20:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:04.378 21:20:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:04.378 21:20:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:04.378 21:20:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:31:04.378 21:20:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:07.696 21:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=b680c3d9-9724-4425-872e-8e2683bb3a09 00:31:07.696 21:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb b680c3d9-9724-4425-872e-8e2683bb3a09 00:31:07.696 21:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=b680c3d9-9724-4425-872e-8e2683bb3a09 00:31:07.696 21:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:07.696 21:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:07.696 21:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:07.696 21:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:07.983 21:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:07.983 { 00:31:07.983 "uuid": "b680c3d9-9724-4425-872e-8e2683bb3a09", 00:31:07.983 "name": "lvs_0", 00:31:07.983 "base_bdev": "Nvme0n1", 00:31:07.983 "total_data_clusters": 238234, 00:31:07.983 "free_clusters": 238234, 00:31:07.983 "block_size": 512, 00:31:07.983 "cluster_size": 4194304 00:31:07.983 } 00:31:07.983 ]' 00:31:07.983 21:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="b680c3d9-9724-4425-872e-8e2683bb3a09") .free_clusters' 00:31:07.983 21:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:31:07.983 21:20:41 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="b680c3d9-9724-4425-872e-8e2683bb3a09") .cluster_size' 00:31:07.983 21:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:07.983 21:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:31:07.983 21:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 00:31:07.983 952936 00:31:07.983 21:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:31:07.983 21:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:07.983 21:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b680c3d9-9724-4425-872e-8e2683bb3a09 lbd_0 20480 00:31:08.241 21:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=ed25dc7b-8bc2-4382-88e5-694c337fc5b3 00:31:08.241 21:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore ed25dc7b-8bc2-4382-88e5-694c337fc5b3 lvs_n_0 00:31:09.172 21:20:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=2aee0ae5-68bc-472e-8151-083d06c9bbe7 00:31:09.172 21:20:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 2aee0ae5-68bc-472e-8151-083d06c9bbe7 00:31:09.172 21:20:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=2aee0ae5-68bc-472e-8151-083d06c9bbe7 00:31:09.172 21:20:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:09.172 21:20:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:09.172 21:20:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:09.172 21:20:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:09.737 21:20:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:09.737 { 00:31:09.737 "uuid": "b680c3d9-9724-4425-872e-8e2683bb3a09", 00:31:09.737 "name": "lvs_0", 00:31:09.737 "base_bdev": "Nvme0n1", 00:31:09.737 "total_data_clusters": 238234, 00:31:09.737 "free_clusters": 233114, 00:31:09.737 "block_size": 512, 00:31:09.737 "cluster_size": 4194304 00:31:09.737 }, 00:31:09.737 { 00:31:09.737 "uuid": "2aee0ae5-68bc-472e-8151-083d06c9bbe7", 00:31:09.737 "name": "lvs_n_0", 00:31:09.737 "base_bdev": "ed25dc7b-8bc2-4382-88e5-694c337fc5b3", 00:31:09.737 "total_data_clusters": 5114, 00:31:09.737 "free_clusters": 5114, 00:31:09.737 "block_size": 512, 00:31:09.737 "cluster_size": 4194304 00:31:09.737 } 00:31:09.737 ]' 00:31:09.737 21:20:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="2aee0ae5-68bc-472e-8151-083d06c9bbe7") .free_clusters' 00:31:09.737 21:20:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:31:09.737 21:20:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="2aee0ae5-68bc-472e-8151-083d06c9bbe7") .cluster_size' 00:31:09.737 21:20:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:09.737 21:20:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:31:09.737 21:20:43 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1378 -- # echo 20456 00:31:09.737 20456 00:31:09.737 21:20:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:09.737 21:20:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2aee0ae5-68bc-472e-8151-083d06c9bbe7 lbd_nest_0 20456 00:31:09.995 21:20:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=697a6849-5a54-4e7c-a39d-dd9bd8a56c08 00:31:09.995 21:20:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:10.252 21:20:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:10.252 21:20:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 697a6849-5a54-4e7c-a39d-dd9bd8a56c08 00:31:10.511 21:20:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:10.769 21:20:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:10.769 21:20:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:10.769 21:20:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:10.769 21:20:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:10.769 21:20:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:22.966 Initializing NVMe Controllers 00:31:22.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:22.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:22.966 Initialization complete. Launching workers. 00:31:22.966 ======================================================== 00:31:22.966 Latency(us) 00:31:22.966 Device Information : IOPS MiB/s Average min max 00:31:22.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.50 0.02 21118.26 239.49 48151.21 00:31:22.966 ======================================================== 00:31:22.966 Total : 47.50 0.02 21118.26 239.49 48151.21 00:31:22.966 00:31:22.966 21:20:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:22.966 21:20:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:32.931 Initializing NVMe Controllers 00:31:32.931 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:32.931 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:32.931 Initialization complete. Launching workers. 
00:31:32.931 ======================================================== 00:31:32.931 Latency(us) 00:31:32.931 Device Information : IOPS MiB/s Average min max 00:31:32.931 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 78.30 9.79 12778.38 6791.61 47904.36 00:31:32.931 ======================================================== 00:31:32.931 Total : 78.30 9.79 12778.38 6791.61 47904.36 00:31:32.931 00:31:32.931 21:21:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:32.931 21:21:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:32.931 21:21:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:42.901 Initializing NVMe Controllers 00:31:42.901 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:42.901 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:42.901 Initialization complete. Launching workers. 00:31:42.901 ======================================================== 00:31:42.901 Latency(us) 00:31:42.901 Device Information : IOPS MiB/s Average min max 00:31:42.901 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4674.33 2.28 6844.82 604.90 16197.56 00:31:42.901 ======================================================== 00:31:42.901 Total : 4674.33 2.28 6844.82 604.90 16197.56 00:31:42.901 00:31:42.901 21:21:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:42.901 21:21:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:52.869 Initializing NVMe Controllers 00:31:52.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:52.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:52.869 Initialization complete. Launching workers. 00:31:52.869 ======================================================== 00:31:52.869 Latency(us) 00:31:52.869 Device Information : IOPS MiB/s Average min max 00:31:52.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3504.68 438.09 9133.54 1279.95 21977.91 00:31:52.869 ======================================================== 00:31:52.869 Total : 3504.68 438.09 9133.54 1279.95 21977.91 00:31:52.869 00:31:52.869 21:21:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:52.869 21:21:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:52.869 21:21:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:02.858 Initializing NVMe Controllers 00:32:02.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:02.858 Controller IO queue size 128, less than required. 00:32:02.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:32:02.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:02.858 Initialization complete. Launching workers. 00:32:02.858 ======================================================== 00:32:02.858 Latency(us) 00:32:02.858 Device Information : IOPS MiB/s Average min max 00:32:02.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8301.69 4.05 15438.95 1841.28 33070.46 00:32:02.859 ======================================================== 00:32:02.859 Total : 8301.69 4.05 15438.95 1841.28 33070.46 00:32:02.859 00:32:02.859 21:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:02.859 21:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:15.056 Initializing NVMe Controllers 00:32:15.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:15.056 Controller IO queue size 128, less than required. 00:32:15.056 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:15.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:15.057 Initialization complete. Launching workers. 00:32:15.057 ======================================================== 00:32:15.057 Latency(us) 00:32:15.057 Device Information : IOPS MiB/s Average min max 00:32:15.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1168.30 146.04 109824.39 23067.07 236586.12 00:32:15.057 ======================================================== 00:32:15.057 Total : 1168.30 146.04 109824.39 23067.07 236586.12 00:32:15.057 00:32:15.057 21:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:15.057 21:21:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 697a6849-5a54-4e7c-a39d-dd9bd8a56c08 00:32:15.057 21:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:15.057 21:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ed25dc7b-8bc2-4382-88e5-694c337fc5b3 00:32:15.057 21:21:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:15.314 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:15.315 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:15.315 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:15.315 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:32:15.315 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:15.315 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:32:15.315 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:15.315 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:15.315 rmmod nvme_tcp 
00:32:15.315 rmmod nvme_fabrics 00:32:15.315 rmmod nvme_keyring 00:32:15.315 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:15.315 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:32:15.315 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:32:15.315 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3103682 ']' 00:32:15.315 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3103682 00:32:15.315 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3103682 ']' 00:32:15.315 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3103682 00:32:15.315 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:32:15.315 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:15.315 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3103682 00:32:15.581 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:15.581 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:15.581 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3103682' 00:32:15.581 killing process with pid 3103682 00:32:15.581 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3103682 00:32:15.581 21:21:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3103682 00:32:18.114 21:21:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:18.114 21:21:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:18.114 21:21:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:18.114 21:21:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:32:18.114 21:21:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:32:18.114 21:21:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:18.114 21:21:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:32:18.114 21:21:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:18.114 21:21:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:18.114 21:21:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.114 21:21:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:18.114 21:21:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:20.019 00:32:20.019 real 1m35.386s 00:32:20.019 user 5m53.145s 00:32:20.019 sys 0m15.777s 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:20.019 ************************************ 00:32:20.019 END TEST nvmf_perf 00:32:20.019 ************************************ 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.019 ************************************ 00:32:20.019 START TEST nvmf_fio_host 00:32:20.019 ************************************ 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:20.019 * Looking for test storage... 00:32:20.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:20.019 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:20.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.020 --rc genhtml_branch_coverage=1 00:32:20.020 --rc genhtml_function_coverage=1 00:32:20.020 --rc genhtml_legend=1 00:32:20.020 --rc geninfo_all_blocks=1 00:32:20.020 --rc geninfo_unexecuted_blocks=1 00:32:20.020 00:32:20.020 ' 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:20.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.020 --rc genhtml_branch_coverage=1 00:32:20.020 --rc genhtml_function_coverage=1 00:32:20.020 --rc genhtml_legend=1 00:32:20.020 --rc geninfo_all_blocks=1 00:32:20.020 --rc geninfo_unexecuted_blocks=1 00:32:20.020 00:32:20.020 ' 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:20.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.020 --rc genhtml_branch_coverage=1 00:32:20.020 --rc genhtml_function_coverage=1 00:32:20.020 --rc genhtml_legend=1 00:32:20.020 --rc geninfo_all_blocks=1 00:32:20.020 --rc geninfo_unexecuted_blocks=1 00:32:20.020 00:32:20.020 ' 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:20.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.020 --rc genhtml_branch_coverage=1 00:32:20.020 --rc genhtml_function_coverage=1 00:32:20.020 --rc genhtml_legend=1 00:32:20.020 --rc geninfo_all_blocks=1 00:32:20.020 --rc geninfo_unexecuted_blocks=1 00:32:20.020 00:32:20.020 ' 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:20.020 21:21:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.020 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:20.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:20.021 
21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:20.021 21:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:22.552 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:22.552 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:22.552 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:22.552 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:22.552 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:22.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:22.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:32:22.553 00:32:22.553 --- 10.0.0.2 ping statistics --- 00:32:22.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.553 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:22.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:22.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:32:22.553 00:32:22.553 --- 10.0.0.1 ping statistics --- 00:32:22.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.553 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3116698 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3116698 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3116698 ']' 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:22.553 21:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.553 [2024-11-19 21:21:55.994490] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:32:22.553 [2024-11-19 21:21:55.994636] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:22.553 [2024-11-19 21:21:56.138733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:22.553 [2024-11-19 21:21:56.277127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:22.553 [2024-11-19 21:21:56.277194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:22.553 [2024-11-19 21:21:56.277216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:22.553 [2024-11-19 21:21:56.277237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:22.553 [2024-11-19 21:21:56.277254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:22.553 [2024-11-19 21:21:56.283130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.553 [2024-11-19 21:21:56.283173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:22.553 [2024-11-19 21:21:56.283198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.553 [2024-11-19 21:21:56.283202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:23.486 21:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:23.486 21:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:32:23.486 21:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:23.744 [2024-11-19 21:21:57.314301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:23.744 21:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:23.744 21:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:23.744 21:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.744 21:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:24.018 Malloc1 00:32:24.018 21:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:24.334 21:21:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:24.612 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:24.870 [2024-11-19 21:21:58.515265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:24.870 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:25.127 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:25.127 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:25.127 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:25.127 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:25.127 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:25.127 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:25.127 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:25.127 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:25.127 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:25.127 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:25.127 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:25.127 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:25.127 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:25.127 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:25.128 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:25.128 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:25.128 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:25.128 21:21:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:25.385 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:25.385 fio-3.35 00:32:25.385 Starting 1 thread 00:32:27.910 00:32:27.910 test: (groupid=0, jobs=1): err= 0: pid=3117279: Tue Nov 19 21:22:01 2024 00:32:27.910 read: IOPS=5769, BW=22.5MiB/s (23.6MB/s)(45.3MiB/2009msec) 00:32:27.910 slat (usec): min=3, max=135, avg= 4.01, stdev= 2.31 00:32:27.910 clat (usec): min=3796, max=20110, avg=12039.90, stdev=1069.28 00:32:27.910 lat (usec): min=3829, max=20114, avg=12043.91, stdev=1069.20 00:32:27.910 clat percentiles (usec): 00:32:27.910 | 1.00th=[ 9765], 5.00th=[10421], 10.00th=[10814], 20.00th=[11207], 00:32:27.910 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:32:27.910 | 70.00th=[12518], 
80.00th=[12911], 90.00th=[13304], 95.00th=[13698], 00:32:27.910 | 99.00th=[14353], 99.50th=[14746], 99.90th=[18220], 99.95th=[19268], 00:32:27.910 | 99.99th=[20055] 00:32:27.910 bw ( KiB/s): min=22128, max=23872, per=99.83%, avg=23038.00, stdev=735.60, samples=4 00:32:27.910 iops : min= 5532, max= 5968, avg=5759.50, stdev=183.90, samples=4 00:32:27.910 write: IOPS=5754, BW=22.5MiB/s (23.6MB/s)(45.2MiB/2009msec); 0 zone resets 00:32:27.910 slat (usec): min=3, max=116, avg= 4.21, stdev= 2.04 00:32:27.910 clat (usec): min=1385, max=19262, avg=10062.86, stdev=894.99 00:32:27.910 lat (usec): min=1398, max=19266, avg=10067.07, stdev=895.04 00:32:27.910 clat percentiles (usec): 00:32:27.910 | 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:32:27.910 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:32:27.910 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11338], 00:32:27.910 | 99.00th=[11994], 99.50th=[12387], 99.90th=[17433], 99.95th=[18220], 00:32:27.910 | 99.99th=[19268] 00:32:27.910 bw ( KiB/s): min=22336, max=23480, per=99.97%, avg=23012.00, stdev=484.09, samples=4 00:32:27.910 iops : min= 5584, max= 5870, avg=5753.00, stdev=121.02, samples=4 00:32:27.910 lat (msec) : 2=0.01%, 4=0.07%, 10=24.69%, 20=75.22%, 50=0.01% 00:32:27.910 cpu : usr=68.48%, sys=30.03%, ctx=60, majf=0, minf=1546 00:32:27.910 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:32:27.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:27.910 issued rwts: total=11591,11561,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.910 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:27.910 00:32:27.910 Run status group 0 (all jobs): 00:32:27.910 READ: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.3MiB (47.5MB), run=2009-2009msec 00:32:27.910 WRITE: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.2MiB (47.4MB), run=2009-2009msec 00:32:28.475 ----------------------------------------------------- 00:32:28.475 Suppressions used: 00:32:28.475 count bytes template 00:32:28.475 1 57 /usr/src/fio/parse.c 00:32:28.475 1 8 libtcmalloc_minimal.so 00:32:28.475 ----------------------------------------------------- 00:32:28.475 00:32:28.475 21:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:28.475 21:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:28.475 21:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:28.475 21:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:28.475 21:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:28.475 21:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:28.475 21:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift 00:32:28.475 21:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:28.475 21:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:28.475 21:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:28.475 21:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:28.475 21:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:28.475 21:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:28.475 21:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:28.475 21:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:28.475 21:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:28.475 21:22:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:28.475 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:28.475 fio-3.35 00:32:28.475 Starting 1 thread 00:32:31.004 00:32:31.004 test: (groupid=0, jobs=1): err= 0: pid=3117615: Tue Nov 19 21:22:04 2024 00:32:31.004 read: IOPS=6158, BW=96.2MiB/s (101MB/s)(193MiB/2008msec) 00:32:31.004 slat (usec): min=3, max=110, avg= 5.34, stdev= 2.19 00:32:31.004 clat (usec): min=3644, max=22348, avg=11968.52, stdev=2912.12 00:32:31.004 lat (usec): min=3648, max=22353, avg=11973.87, stdev=2912.16 00:32:31.004 clat percentiles (usec): 00:32:31.004 | 1.00th=[ 6259], 5.00th=[ 7504], 10.00th=[ 8225], 20.00th=[ 9503], 00:32:31.004 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11731], 60.00th=[12518], 00:32:31.004 | 70.00th=[13304], 80.00th=[14353], 90.00th=[15795], 95.00th=[16909], 00:32:31.004 | 99.00th=[19530], 99.50th=[20055], 99.90th=[21890], 99.95th=[21890], 00:32:31.004 | 99.99th=[22152] 00:32:31.004 bw ( KiB/s): min=43072, max=53984, per=49.47%, avg=48744.00, stdev=5607.55, samples=4 00:32:31.004 iops : min= 2692, max= 3374, avg=3046.50, stdev=350.47, samples=4 00:32:31.004 write: IOPS=3476, BW=54.3MiB/s (57.0MB/s)(99.6MiB/1834msec); 0 zone resets 00:32:31.004 slat (usec): min=33, max=163, avg=37.04, stdev= 6.26 00:32:31.004 clat (usec): min=6828, max=27751, avg=15862.35, stdev=2825.42 00:32:31.004 lat (usec): min=6863, max=27792, avg=15899.39, stdev=2825.25 00:32:31.004 clat percentiles (usec): 00:32:31.004 | 1.00th=[10945], 5.00th=[11994], 10.00th=[12518], 20.00th=[13304], 00:32:31.004 | 30.00th=[14091], 40.00th=[14877], 50.00th=[15664], 60.00th=[16319], 00:32:31.004 | 70.00th=[16909], 80.00th=[17957], 90.00th=[19530], 95.00th=[21103], 00:32:31.004 | 99.00th=[23987], 99.50th=[26084], 99.90th=[27395], 99.95th=[27657], 00:32:31.004 | 99.99th=[27657] 00:32:31.004 bw ( KiB/s): min=45088, max=56288, per=91.18%, avg=50720.00, stdev=5400.77, samples=4 00:32:31.004 iops : min= 2818, max= 3518, avg=3170.00, stdev=337.55, samples=4 00:32:31.004 lat (msec) : 4=0.06%, 
10=17.18%, 20=79.57%, 50=3.19% 00:32:31.004 cpu : usr=77.54%, sys=21.12%, ctx=50, majf=0, minf=2113 00:32:31.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:32:31.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:31.004 issued rwts: total=12366,6376,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:31.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:31.004 00:32:31.004 Run status group 0 (all jobs): 00:32:31.004 READ: bw=96.2MiB/s (101MB/s), 96.2MiB/s-96.2MiB/s (101MB/s-101MB/s), io=193MiB (203MB), run=2008-2008msec 00:32:31.004 WRITE: bw=54.3MiB/s (57.0MB/s), 54.3MiB/s-54.3MiB/s (57.0MB/s-57.0MB/s), io=99.6MiB (104MB), run=1834-1834msec 00:32:31.262 ----------------------------------------------------- 00:32:31.262 Suppressions used: 00:32:31.262 count bytes template 00:32:31.262 1 57 /usr/src/fio/parse.c 00:32:31.262 130 12480 /usr/src/fio/iolog.c 00:32:31.262 1 8 libtcmalloc_minimal.so 00:32:31.262 ----------------------------------------------------- 00:32:31.262 00:32:31.262 21:22:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:31.520 21:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:31.520 21:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:31.520 21:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:31.520 21:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:31.520 21:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:32:31.520 21:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:31.520 21:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:31.520 21:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:31.520 21:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:31.520 21:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:32:31.520 21:22:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:32:34.801 Nvme0n1 00:32:34.801 21:22:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:38.084 21:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=1ccbf242-c85f-4403-906d-c0a017ea4289 00:32:38.084 21:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 1ccbf242-c85f-4403-906d-c0a017ea4289 00:32:38.084 21:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=1ccbf242-c85f-4403-906d-c0a017ea4289 00:32:38.084 21:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:38.084 21:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # 
local fc 00:32:38.084 21:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:38.084 21:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:38.084 21:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:38.084 { 00:32:38.084 "uuid": "1ccbf242-c85f-4403-906d-c0a017ea4289", 00:32:38.084 "name": "lvs_0", 00:32:38.084 "base_bdev": "Nvme0n1", 00:32:38.084 "total_data_clusters": 930, 00:32:38.084 "free_clusters": 930, 00:32:38.084 "block_size": 512, 00:32:38.084 "cluster_size": 1073741824 00:32:38.084 } 00:32:38.084 ]' 00:32:38.084 21:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="1ccbf242-c85f-4403-906d-c0a017ea4289") .free_clusters' 00:32:38.084 21:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:32:38.084 21:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="1ccbf242-c85f-4403-906d-c0a017ea4289") .cluster_size' 00:32:38.084 21:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:32:38.084 21:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:32:38.084 21:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:32:38.084 952320 00:32:38.085 21:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:38.343 77605aa4-5e36-4075-83a4-e79de4dc7013 00:32:38.343 21:22:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:38.601 21:22:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:38.859 21:22:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:39.424 21:22:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:39.424 21:22:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:39.424 21:22:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:39.424 21:22:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:39.424 21:22:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:39.424 21:22:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:39.424 21:22:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:39.424 21:22:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:39.424 21:22:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:39.424 21:22:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:39.424 21:22:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:39.424 21:22:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:39.424 21:22:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:39.424 21:22:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:39.424 21:22:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:39.424 21:22:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:39.424 21:22:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:39.424 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:39.424 fio-3.35 00:32:39.424 Starting 1 thread 00:32:41.953 00:32:41.954 test: (groupid=0, jobs=1): err= 0: pid=3119012: Tue Nov 19 21:22:15 2024 00:32:41.954 read: IOPS=4399, BW=17.2MiB/s (18.0MB/s)(34.6MiB/2011msec) 00:32:41.954 slat (usec): min=3, max=202, avg= 4.11, stdev= 3.28 00:32:41.954 clat (usec): min=884, max=172388, avg=15731.67, stdev=13142.92 00:32:41.954 lat (usec): min=889, max=172452, avg=15735.79, stdev=13143.49 00:32:41.954 clat percentiles (msec): 00:32:41.954 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:32:41.954 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 16], 00:32:41.954 | 70.00th=[ 16], 80.00th=[ 16], 90.00th=[ 17], 95.00th=[ 17], 00:32:41.954 | 99.00th=[ 20], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 174], 00:32:41.954 | 99.99th=[ 174] 00:32:41.954 bw ( KiB/s): min=12247, max=19400, per=99.78%, avg=17559.75, stdev=3542.90, samples=4 00:32:41.954 iops : min= 3061, max= 4850, avg=4389.75, stdev=886.10, samples=4 00:32:41.954 write: IOPS=4401, BW=17.2MiB/s (18.0MB/s)(34.6MiB/2011msec); 0 zone resets 00:32:41.954 slat (usec): min=3, max=158, avg= 4.24, stdev= 2.46 00:32:41.954 clat (usec): min=596, max=169686, avg=13065.39, stdev=12361.65 00:32:41.954 lat (usec): min=601, max=169697, avg=13069.63, stdev=12362.24 00:32:41.954 clat percentiles (msec): 00:32:41.954 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:32:41.954 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13], 00:32:41.954 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 14], 95.00th=[ 14], 00:32:41.954 | 99.00th=[ 17], 99.50th=[ 159], 99.90th=[ 169], 99.95th=[ 169], 00:32:41.954 | 99.99th=[ 169] 00:32:41.954 bw ( KiB/s): min=12950, max=19384, per=99.79%, avg=17571.50, stdev=3087.11, samples=4 00:32:41.954 iops : min= 3237, max= 4846, avg=4392.75, stdev=772.03, samples=4 00:32:41.954 lat (usec) : 750=0.02%, 1000=0.01% 00:32:41.954 lat 
(msec) : 2=0.02%, 4=0.08%, 10=1.80%, 20=97.19%, 50=0.15% 00:32:41.954 lat (msec) : 250=0.72% 00:32:41.954 cpu : usr=61.99%, sys=36.67%, ctx=70, majf=0, minf=1543 00:32:41.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:41.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:41.954 issued rwts: total=8847,8852,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:41.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:41.954 00:32:41.954 Run status group 0 (all jobs): 00:32:41.954 READ: bw=17.2MiB/s (18.0MB/s), 17.2MiB/s-17.2MiB/s (18.0MB/s-18.0MB/s), io=34.6MiB (36.2MB), run=2011-2011msec 00:32:41.954 WRITE: bw=17.2MiB/s (18.0MB/s), 17.2MiB/s-17.2MiB/s (18.0MB/s-18.0MB/s), io=34.6MiB (36.3MB), run=2011-2011msec 00:32:42.212 ----------------------------------------------------- 00:32:42.212 Suppressions used: 00:32:42.212 count bytes template 00:32:42.212 1 58 /usr/src/fio/parse.c 00:32:42.212 1 8 libtcmalloc_minimal.so 00:32:42.212 ----------------------------------------------------- 00:32:42.212 00:32:42.212 21:22:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:42.470 21:22:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:43.842 21:22:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=66d43673-ac40-488f-a4a6-c4ffee6a2df7 00:32:43.842 21:22:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 66d43673-ac40-488f-a4a6-c4ffee6a2df7 00:32:43.842 21:22:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=66d43673-ac40-488f-a4a6-c4ffee6a2df7 00:32:43.842 21:22:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:43.842 21:22:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:43.842 21:22:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:43.842 21:22:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:44.099 21:22:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:44.100 { 00:32:44.100 "uuid": "1ccbf242-c85f-4403-906d-c0a017ea4289", 00:32:44.100 "name": "lvs_0", 00:32:44.100 "base_bdev": "Nvme0n1", 00:32:44.100 "total_data_clusters": 930, 00:32:44.100 "free_clusters": 0, 00:32:44.100 "block_size": 512, 00:32:44.100 "cluster_size": 1073741824 00:32:44.100 }, 00:32:44.100 { 00:32:44.100 "uuid": "66d43673-ac40-488f-a4a6-c4ffee6a2df7", 00:32:44.100 "name": "lvs_n_0", 00:32:44.100 "base_bdev": "77605aa4-5e36-4075-83a4-e79de4dc7013", 00:32:44.100 "total_data_clusters": 237847, 00:32:44.100 "free_clusters": 237847, 00:32:44.100 "block_size": 512, 00:32:44.100 "cluster_size": 4194304 00:32:44.100 } 00:32:44.100 ]' 00:32:44.100 21:22:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="66d43673-ac40-488f-a4a6-c4ffee6a2df7") .free_clusters' 00:32:44.100 21:22:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:32:44.100 21:22:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="66d43673-ac40-488f-a4a6-c4ffee6a2df7") .cluster_size' 00:32:44.100 21:22:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:44.100 21:22:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:32:44.100 21:22:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:32:44.100 951388 00:32:44.100 21:22:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:45.474 efe447ae-e05a-4003-af0b-c16912e156b8 00:32:45.474 21:22:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:45.474 21:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:45.731 21:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:45.989 21:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:45.989 21:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:45.989 21:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:45.989 21:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:45.989 21:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:45.989 21:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:45.989 21:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:45.989 21:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:45.989 21:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:45.989 21:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:45.989 21:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:45.989 21:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:45.989 21:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:45.989 21:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:45.989 21:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1351 -- # break 00:32:45.989 21:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:45.989 21:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:46.247 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:46.247 fio-3.35 00:32:46.247 Starting 1 thread 00:32:48.775 00:32:48.775 test: (groupid=0, jobs=1): err= 0: pid=3119872: Tue Nov 19 21:22:22 2024 00:32:48.775 read: IOPS=4376, BW=17.1MiB/s (17.9MB/s)(34.4MiB/2011msec) 00:32:48.775 slat (usec): min=3, max=138, avg= 3.92, stdev= 2.46 00:32:48.775 clat (usec): min=6142, max=26289, avg=15974.70, stdev=1545.63 00:32:48.775 lat (usec): min=6150, max=26293, avg=15978.61, stdev=1545.57 00:32:48.775 clat percentiles (usec): 00:32:48.775 | 1.00th=[12387], 5.00th=[13566], 10.00th=[14222], 20.00th=[14746], 00:32:48.775 | 30.00th=[15270], 40.00th=[15533], 50.00th=[15926], 60.00th=[16319], 00:32:48.775 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:32:48.775 | 99.00th=[19268], 99.50th=[20055], 99.90th=[25822], 99.95th=[26084], 00:32:48.775 | 99.99th=[26346] 00:32:48.775 bw ( KiB/s): min=16152, max=18256, per=99.75%, avg=17462.00, stdev=910.65, samples=4 00:32:48.775 iops : min= 4038, max= 4564, avg=4365.50, stdev=227.66, samples=4 00:32:48.775 write: IOPS=4373, BW=17.1MiB/s (17.9MB/s)(34.4MiB/2011msec); 0 zone resets 00:32:48.775 slat (usec): min=3, max=111, avg= 4.05, stdev= 1.91 00:32:48.775 clat (usec): min=2918, max=23424, avg=13127.52, stdev=1269.91 00:32:48.775 lat (usec): min=2925, max=23429, avg=13131.57, stdev=1269.86 00:32:48.775 clat percentiles (usec): 00:32:48.775 | 1.00th=[10290], 5.00th=[11207], 10.00th=[11731], 20.00th=[12256], 00:32:48.775 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:32:48.775 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14615], 95.00th=[15008], 00:32:48.775 | 99.00th=[15926], 99.50th=[16581], 99.90th=[21365], 99.95th=[23200], 00:32:48.775 | 99.99th=[23462] 00:32:48.775 bw ( KiB/s): min=17240, max=17600, per=99.90%, avg=17478.00, stdev=164.78, samples=4 00:32:48.775 iops : min= 4310, max= 4400, avg=4369.50, stdev=41.19, samples=4 00:32:48.775 lat (msec) : 4=0.02%, 10=0.50%, 20=99.13%, 50=0.35% 00:32:48.775 cpu : usr=69.00%, sys=29.70%, ctx=82, majf=0, minf=1543 00:32:48.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:48.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:48.775 issued rwts: total=8801,8796,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:48.775 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:48.775 00:32:48.775 Run status group 0 (all jobs): 00:32:48.775 READ: bw=17.1MiB/s (17.9MB/s), 17.1MiB/s-17.1MiB/s (17.9MB/s-17.9MB/s), io=34.4MiB (36.0MB), run=2011-2011msec 00:32:48.775 WRITE: bw=17.1MiB/s (17.9MB/s), 17.1MiB/s-17.1MiB/s (17.9MB/s-17.9MB/s), io=34.4MiB (36.0MB), run=2011-2011msec 00:32:48.775 ----------------------------------------------------- 00:32:48.775 Suppressions used: 00:32:48.775 count bytes template 00:32:48.775 1 58 /usr/src/fio/parse.c 
00:32:48.775 1 8 libtcmalloc_minimal.so 00:32:48.775 ----------------------------------------------------- 00:32:48.775 00:32:49.033 21:22:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:49.290 21:22:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:49.290 21:22:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:53.472 21:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:53.730 21:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:57.010 21:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:57.010 21:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:58.909 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:58.909 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:58.909 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:58.909 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:58.909 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:58.909 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:58.909 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:32:58.909 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:58.909 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:58.909 rmmod nvme_tcp 00:32:58.909 rmmod nvme_fabrics 00:32:59.167 rmmod nvme_keyring 00:32:59.167 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:59.167 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:59.167 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:59.167 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3116698 ']' 00:32:59.167 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3116698 00:32:59.167 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3116698 ']' 00:32:59.167 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3116698 00:32:59.167 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:32:59.167 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:59.167 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3116698 00:32:59.167 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:59.167 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
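Condensed for reference, the cleanup traced above reduces to the RPC sequence below. This is a sketch reusing the rpc.py path, subsystem, lvstore, and controller names from this run, not additional captured output: nested logical volumes are deleted before their parent lvstore, the outer lvol and lvstore follow, and the NVMe controller is detached last.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3   # stop exporting the nested lvol
sync
$rpc -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0         # nested lvol first
$rpc bdev_lvol_delete_lvstore -l lvs_n_0                # then its lvstore
$rpc bdev_lvol_delete lvs_0/lbd_0                       # outer lvol next
$rpc bdev_lvol_delete_lvstore -l lvs_0                  # then the outer lvstore
$rpc bdev_nvme_detach_controller Nvme0                  # release the NVMe drive last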
00:32:59.167 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3116698' 00:32:59.167 killing process with pid 3116698 00:32:59.167 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3116698 00:32:59.167 21:22:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3116698 00:33:00.542 21:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:00.542 21:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:00.542 21:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:00.542 21:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:33:00.542 21:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:33:00.542 21:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:00.542 21:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:33:00.542 21:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:00.542 21:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:00.542 21:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.542 21:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:00.542 21:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.445 21:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:02.445 00:33:02.445 real 0m42.486s 00:33:02.445 user 2m41.961s 00:33:02.445 sys 0m8.768s 00:33:02.445 21:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:02.445 21:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.445 ************************************ 00:33:02.445 END TEST nvmf_fio_host 00:33:02.445 ************************************ 00:33:02.445 21:22:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:02.445 21:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:02.445 21:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:02.445 21:22:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.445 ************************************ 00:33:02.445 START TEST nvmf_failover 00:33:02.445 ************************************ 00:33:02.445 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:02.445 * Looking for test storage... 
00:33:02.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:02.445 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:02.445 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:33:02.445 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:02.703 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:02.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.703 --rc genhtml_branch_coverage=1 00:33:02.703 --rc genhtml_function_coverage=1 00:33:02.704 --rc genhtml_legend=1 00:33:02.704 --rc geninfo_all_blocks=1 00:33:02.704 --rc geninfo_unexecuted_blocks=1 00:33:02.704 00:33:02.704 ' 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:02.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.704 --rc genhtml_branch_coverage=1 00:33:02.704 --rc genhtml_function_coverage=1 00:33:02.704 --rc genhtml_legend=1 00:33:02.704 --rc geninfo_all_blocks=1 00:33:02.704 --rc geninfo_unexecuted_blocks=1 00:33:02.704 00:33:02.704 ' 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:02.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.704 --rc genhtml_branch_coverage=1 00:33:02.704 --rc genhtml_function_coverage=1 00:33:02.704 --rc genhtml_legend=1 00:33:02.704 --rc geninfo_all_blocks=1 00:33:02.704 --rc geninfo_unexecuted_blocks=1 00:33:02.704 00:33:02.704 ' 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:02.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.704 --rc genhtml_branch_coverage=1 00:33:02.704 --rc genhtml_function_coverage=1 00:33:02.704 --rc genhtml_legend=1 00:33:02.704 --rc geninfo_all_blocks=1 00:33:02.704 --rc geninfo_unexecuted_blocks=1 00:33:02.704 00:33:02.704 ' 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:02.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
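Worth noting for the failover run that starts here (a sketch, not output from this log): the test drives two SPDK applications, so each rpc.py call has to pick the right RPC socket. The target answers on the default /var/tmp/spdk.sock, while a bdevperf instance is addressed through the bdevperf_rpc_sock path set just below; the controller and subsystem names in the second command are illustrative.

# default socket -> the nvmf target started by the test
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
# explicit socket -> the bdevperf app; rpc.py's leading -s selects the socket,
# while the method's own -s is the NVMe/TCP service id (port)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -f ipv4 -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1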
00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:02.704 21:22:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:04.666 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:04.666 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:04.666 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:04.666 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:04.666 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:04.667 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:04.667 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:04.667 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:04.667 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:04.667 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:04.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:04.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:33:04.927 00:33:04.927 --- 10.0.0.2 ping statistics --- 00:33:04.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:04.927 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:04.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:04.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:33:04.927 00:33:04.927 --- 10.0.0.1 ping statistics --- 00:33:04.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:04.927 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3123385 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3123385 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3123385 ']' 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:04.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:04.927 21:22:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:04.927 [2024-11-19 21:22:38.657314] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:33:04.927 [2024-11-19 21:22:38.657465] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:05.185 [2024-11-19 21:22:38.807829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:05.185 [2024-11-19 21:22:38.949888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:05.185 [2024-11-19 21:22:38.949965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:05.185 [2024-11-19 21:22:38.949999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:05.185 [2024-11-19 21:22:38.950023] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:05.185 [2024-11-19 21:22:38.950044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:05.185 [2024-11-19 21:22:38.952705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:05.185 [2024-11-19 21:22:38.952814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.185 [2024-11-19 21:22:38.952820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:06.119 21:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:06.119 21:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:06.119 21:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:06.119 21:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:06.119 21:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:06.119 21:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:06.119 21:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:06.377 [2024-11-19 21:22:39.927871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:06.377 21:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:06.635 Malloc0 00:33:06.635 21:22:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:06.893 21:22:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:07.151 21:22:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:07.408 [2024-11-19 21:22:41.126153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.408 21:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:07.666 [2024-11-19 21:22:41.390977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:07.666 21:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:07.924 [2024-11-19 21:22:41.655868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:33:07.924 21:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3123797 00:33:07.924 21:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:07.924 21:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:07.924 21:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3123797 /var/tmp/bdevperf.sock 00:33:07.924 21:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3123797 ']' 00:33:07.924 21:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:07.924 21:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:07.924 21:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:07.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:07.924 21:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:07.924 21:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:09.295 21:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:09.295 21:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:09.295 21:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:09.552 NVMe0n1 00:33:09.552 21:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:10.118 00:33:10.118 21:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3124063 00:33:10.118 21:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:10.118 21:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:11.053 21:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:11.311 [2024-11-19 21:22:44.997766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.311 [2024-11-19 21:22:44.997848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.311 [2024-11-19 21:22:44.997876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 
00:33:11.311 [2024-11-19 21:22:44.997894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.311 [2024-11-19 21:22:44.997911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.311 [2024-11-19 21:22:44.997928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.311 [2024-11-19 21:22:44.997946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.311 [2024-11-19 21:22:44.997962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.311 [2024-11-19 21:22:44.997995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.311 [2024-11-19 21:22:44.998011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.311 [2024-11-19 21:22:44.998043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.311 [2024-11-19 21:22:44.998063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.311 [2024-11-19 21:22:44.998092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.311 [2024-11-19 21:22:44.998110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.311 [2024-11-19 21:22:44.998128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.311 [2024-11-19 21:22:44.998146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.311 [2024-11-19 21:22:44.998164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.311 [2024-11-19 21:22:44.998193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.311 [2024-11-19 21:22:44.998211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.312 [2024-11-19 21:22:44.998229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.312 [2024-11-19 21:22:44.998246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.312 [2024-11-19 21:22:44.998263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.312 [2024-11-19 21:22:44.998280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.312 [2024-11-19 21:22:44.998297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 
00:33:11.312 [2024-11-19 21:22:44.998314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.312 [2024-11-19 21:22:44.998331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.312 [2024-11-19 21:22:44.998348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.312 [2024-11-19 21:22:44.998381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.312 [2024-11-19 21:22:44.998398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.312 [2024-11-19 21:22:44.998415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:11.312 21:22:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:14.592 21:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:14.850 00:33:14.850 21:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:15.109 [2024-11-19 21:22:48.844433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:15.109 [2024-11-19 21:22:48.844525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:15.109 [2024-11-19 21:22:48.844547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:15.109 [2024-11-19 21:22:48.844565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:15.109 [2024-11-19 21:22:48.844582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:15.109 [2024-11-19 21:22:48.844599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:15.109 [2024-11-19 21:22:48.844617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:15.109 21:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:18.389 21:22:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:18.389 [2024-11-19 21:22:52.122240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.389 21:22:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:19.762 21:22:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:19.762 [2024-11-19 21:22:53.408810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.762 [2024-11-19 21:22:53.408907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.762 [2024-11-19 21:22:53.408931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.762 [2024-11-19 21:22:53.408956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.762 [2024-11-19 21:22:53.408976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.762 [2024-11-19 21:22:53.408995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.762 [2024-11-19 21:22:53.409014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.762 [2024-11-19 21:22:53.409032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.762 [2024-11-19 21:22:53.409051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.762 [2024-11-19 21:22:53.409079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.762 [2024-11-19 21:22:53.409101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 [2024-11-19 21:22:53.409551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:19.763 21:22:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3124063 00:33:26.333 { 00:33:26.333 "results": [ 00:33:26.333 { 00:33:26.333 "job": "NVMe0n1", 00:33:26.333 "core_mask": "0x1", 00:33:26.333 "workload": "verify", 00:33:26.333 "status": "finished", 00:33:26.333 "verify_range": { 00:33:26.333 "start": 0, 00:33:26.333 "length": 16384 00:33:26.333 }, 00:33:26.333 "queue_depth": 128, 00:33:26.333 "io_size": 4096, 00:33:26.333 "runtime": 15.013033, 00:33:26.333 "iops": 5969.746419660837, 00:33:26.333 "mibps": 23.319321951800145, 00:33:26.333 "io_failed": 12093, 00:33:26.333 "io_timeout": 0, 00:33:26.333 "avg_latency_us": 18857.468657520738, 00:33:26.333 "min_latency_us": 1104.4029629629629, 00:33:26.333 "max_latency_us": 55147.33037037037 00:33:26.333 } 00:33:26.333 ], 00:33:26.333 "core_count": 1 00:33:26.333 } 00:33:26.333 21:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3123797 00:33:26.333 21:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3123797 ']' 00:33:26.333 21:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@958 -- # kill -0 3123797 00:33:26.333 21:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:26.333 21:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:26.333 21:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3123797 00:33:26.333 21:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:26.333 21:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:26.333 21:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3123797' 00:33:26.333 killing process with pid 3123797 00:33:26.333 21:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3123797 00:33:26.333 21:22:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3123797 00:33:26.333 21:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:26.333 [2024-11-19 21:22:41.762188] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:33:26.333 [2024-11-19 21:22:41.762348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3123797 ] 00:33:26.333 [2024-11-19 21:22:41.901305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.333 [2024-11-19 21:22:42.027005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.333 Running I/O for 15 seconds... 
00:33:26.333 6172.00 IOPS, 24.11 MiB/s [2024-11-19T20:23:00.128Z] [2024-11-19 21:22:44.999581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.333 [2024-11-19 21:22:44.999637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.333 [2024-11-19 21:22:44.999696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:57544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.333 [2024-11-19 21:22:44.999720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.333 [2024-11-19 21:22:44.999747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:57552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.333 [2024-11-19 21:22:44.999769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.333 [2024-11-19 21:22:44.999793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.333 [2024-11-19 21:22:44.999816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.333 [2024-11-19 21:22:44.999839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:57568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.333 [2024-11-19 21:22:44.999860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.333 [2024-11-19 21:22:44.999883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:57576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.333 [2024-11-19 21:22:44.999903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.333 [2024-11-19 21:22:44.999926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:57584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.333 [2024-11-19 21:22:44.999948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.333 [2024-11-19 21:22:44.999970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:57592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.333 [2024-11-19 21:22:44.999991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.333 [2024-11-19 21:22:45.000014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:57600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.333 [2024-11-19 21:22:45.000035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.333 [2024-11-19 21:22:45.000067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:57608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.333 [2024-11-19 21:22:45.000113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:33:26.333 [2024-11-19 21:22:45.000139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.333 [2024-11-19 21:22:45.000161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.333 [2024-11-19 21:22:45.000193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:57624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.333 [2024-11-19 21:22:45.000215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.333 [2024-11-19 21:22:45.000257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:57632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.333 [2024-11-19 21:22:45.000279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.333 [2024-11-19 21:22:45.000303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.333 [2024-11-19 21:22:45.000324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.333 [2024-11-19 21:22:45.000347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:57648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.333 [2024-11-19 21:22:45.000368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.000412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:57656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.334 [2024-11-19 21:22:45.000433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.000456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.334 [2024-11-19 21:22:45.000476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.000499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.334 [2024-11-19 21:22:45.000520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.000543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:57680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.334 [2024-11-19 21:22:45.000563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.000586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:57688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.334 [2024-11-19 21:22:45.000606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 
21:22:45.000629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:57696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.334 [2024-11-19 21:22:45.000649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.000672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:57704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.334 [2024-11-19 21:22:45.000692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.000715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.334 [2024-11-19 21:22:45.000736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.000759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:57720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.334 [2024-11-19 21:22:45.000784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.000808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:57728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.334 [2024-11-19 21:22:45.000829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.000852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.000873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.000896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:58136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.000917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.000939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.000959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.000982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:37 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.334 [2024-11-19 21:22:45.001932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.001957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:57736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.334 [2024-11-19 21:22:45.001978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.002001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:57744 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:26.334 [2024-11-19 21:22:45.002022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.002045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.334 [2024-11-19 21:22:45.002066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.002116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.334 [2024-11-19 21:22:45.002138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.334 [2024-11-19 21:22:45.002162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:57768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.334 [2024-11-19 21:22:45.002183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.002207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.002228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.002252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.002273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.002297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.002318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.002342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:57800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.002363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.002402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.002423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.002446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:57816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.002466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.002490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:57824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 
[2024-11-19 21:22:45.002511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.002533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:57832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.002558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.002583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.002604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.002626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:57848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.002647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.002669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.002690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.002713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.002734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.002756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.002776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.002800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.002820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.002842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.002863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.002886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:57896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.002908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.002931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:57904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.002952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.002974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:57912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.002995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:57920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:57944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:57976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:57984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:58000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:58008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:58016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:58024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:58032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:58040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:58056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:58064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.003958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.003979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.004002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:58080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.335 [2024-11-19 21:22:45.004023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.335 [2024-11-19 21:22:45.004045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:58088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.336 [2024-11-19 21:22:45.004092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.004122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:58096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.336 [2024-11-19 21:22:45.004144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.004168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.336 [2024-11-19 21:22:45.004190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.004214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.336 [2024-11-19 21:22:45.004236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.004261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.004283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.004306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.004328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.004356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.004404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.004428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.004450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 
[2024-11-19 21:22:45.004473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.004495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.004519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.004540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.004563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.004584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.004607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.004628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.004651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.004672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.004696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.004716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.004739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.004760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.004784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.004805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.004828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.004849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.004873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.004895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.004919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.004940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.004967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.004989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.005013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.005034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.005058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.005103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.005130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.005152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.005176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.005198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.005223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.005246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.005270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.005292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.005316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.005338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.005362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.005399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.005423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:86 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.005444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.005467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.005488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.005511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.005532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.005555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.005581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.005606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.005628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.005651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.336 [2024-11-19 21:22:45.005672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.005714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.336 [2024-11-19 21:22:45.005737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.336 [2024-11-19 21:22:45.005756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58560 len:8 PRP1 0x0 PRP2 0x0 00:33:26.336 [2024-11-19 21:22:45.005777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.006046] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:26.336 [2024-11-19 21:22:45.006129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:26.336 [2024-11-19 21:22:45.006165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.006189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:26.336 [2024-11-19 21:22:45.006210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.006231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:26.336 [2024-11-19 21:22:45.006251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.336 [2024-11-19 21:22:45.006273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:26.337 [2024-11-19 21:22:45.006293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:45.006313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:33:26.337 [2024-11-19 21:22:45.006395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:26.337 [2024-11-19 21:22:45.010281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:26.337 [2024-11-19 21:22:45.212942] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:33:26.337 5462.50 IOPS, 21.34 MiB/s [2024-11-19T20:23:00.132Z] 5613.67 IOPS, 21.93 MiB/s [2024-11-19T20:23:00.132Z] 5694.00 IOPS, 22.24 MiB/s [2024-11-19T20:23:00.132Z] [2024-11-19 21:22:48.845588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.845650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.845707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.845752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.845792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.845815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.845839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.845860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.845883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.845905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.845928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.845949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.845972] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.845995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.846042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.846113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.846175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.846221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.846280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.846328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.846374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.846440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.846488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23992 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.846531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.846578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.846622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.846667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.846710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.846755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.846799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.846843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.846888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.846933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.846956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 
[2024-11-19 21:22:48.846977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.847009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.847033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.847057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.847104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.847131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.337 [2024-11-19 21:22:48.847154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.337 [2024-11-19 21:22:48.847177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.847200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.847224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.847246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.847270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.338 [2024-11-19 21:22:48.847292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.847317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.847339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.847364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.847402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.847425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.847446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.847471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.847492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.847515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.847536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.847560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.847582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.847605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.847626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.847654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.847676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.847700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.847722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.847745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.847765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.847789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.847810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.847833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.847854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.847877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.847898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.847921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.847943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.847966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.847987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:33:26.338 [2024-11-19 21:22:48.848473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848942] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.848963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.848986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.338 [2024-11-19 21:22:48.849007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.338 [2024-11-19 21:22:48.849031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.338 [2024-11-19 21:22:48.849052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.339 [2024-11-19 21:22:48.849122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.339 [2024-11-19 21:22:48.849170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.339 [2024-11-19 21:22:48.849217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.339 [2024-11-19 21:22:48.849263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.339 [2024-11-19 21:22:48.849309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.339 [2024-11-19 21:22:48.849358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.849420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.849470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.849516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.849559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.849605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.849651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.849695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.849740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.849784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.849830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.849875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:115 nsid:1 lba:24496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.849919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.849964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.849996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.850019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.850042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.850067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.850116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.850139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.850163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.850185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.850209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.850231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.850255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.850277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.850301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.850322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.850353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.850375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.850416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24576 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.850437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.850460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.339 [2024-11-19 21:22:48.850481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.850536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.339 [2024-11-19 21:22:48.850562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24592 len:8 PRP1 0x0 PRP2 0x0 00:33:26.339 [2024-11-19 21:22:48.850583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.850674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:26.339 [2024-11-19 21:22:48.850703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.850729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:26.339 [2024-11-19 21:22:48.850749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.850770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:26.339 [2024-11-19 21:22:48.850796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.850818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:26.339 [2024-11-19 21:22:48.850839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.850859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2000 is same with the state(6) to be set 00:33:26.339 [2024-11-19 21:22:48.851217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.339 [2024-11-19 21:22:48.851246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.339 [2024-11-19 21:22:48.851266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24600 len:8 PRP1 0x0 PRP2 0x0 00:33:26.339 [2024-11-19 21:22:48.851288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.851321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.339 [2024-11-19 21:22:48.851341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.339 [2024-11-19 21:22:48.851359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:8 PRP1 
0x0 PRP2 0x0 00:33:26.339 [2024-11-19 21:22:48.851379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.851414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.339 [2024-11-19 21:22:48.851433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.339 [2024-11-19 21:22:48.851450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24616 len:8 PRP1 0x0 PRP2 0x0 00:33:26.339 [2024-11-19 21:22:48.851469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.339 [2024-11-19 21:22:48.851489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.851506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.851530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24624 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.851549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.851569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.851586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.851603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24632 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.851621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.851641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.851657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.851674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.851693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.851712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.851728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.851746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24648 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.851769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.851790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.851808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.851825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24656 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.851843] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.851868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.851886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.851903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24664 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.851922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.851942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.851958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.851976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.851994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.852013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.852030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.852047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24680 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.852066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.852113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.852131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.852151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24688 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.852170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.852191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.852222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.852241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23736 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.852261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.852282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.852299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.852317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23744 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.852337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.852358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.852395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.852414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23752 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.852434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.852454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.852470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.852488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23760 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.852507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.852529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.852546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.852563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23768 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.852582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.852602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.852618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.852635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.852654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.852674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.852691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.852708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23784 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.852727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.852747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.852763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.852781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23792 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.852799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.852819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.852836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.852853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23800 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.852871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.852891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.852907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.852924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.852942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.852966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.852984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.853001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23816 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.853020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.853040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.853056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.853093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23824 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.853115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.853136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.853154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.853171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23832 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.853190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.853210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.853227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.340 [2024-11-19 21:22:48.853245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23840 len:8 PRP1 0x0 PRP2 0x0 00:33:26.340 [2024-11-19 21:22:48.853264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.340 [2024-11-19 21:22:48.853284] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.340 [2024-11-19 21:22:48.853301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.853318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23848 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.853337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.853358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.853375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.853408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23856 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.853428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.853448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.853464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.853480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23864 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.853499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.853518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.853535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.853552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.853575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.853595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.853612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.853630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23880 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.853649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.853669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.853686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.853703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23888 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.853721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.853741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.853758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.853775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23896 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.853793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.853813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.853830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.853852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.853872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.853892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.853908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.853925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23912 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.853944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.853963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.853979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.853996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23920 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.854015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.854036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.854052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.854075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23928 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.854114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.854135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.854152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.854177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.854198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.854219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 
21:22:48.854237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.854256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23944 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.854275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.854296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.854313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.854331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23952 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.854350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.854371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.854402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.854419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23960 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.854438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.854458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.854475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.854498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.854518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.854538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.854555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.854572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23976 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.854591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.854611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.854627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.854650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23984 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.854669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.868969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.869018] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.869039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23992 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.869086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.869121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.869145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.869165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.869185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.869205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.869223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.869241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24008 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.869259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.869280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.869296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.869314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24016 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.869358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.869379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.869396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.869428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24024 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.869447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.341 [2024-11-19 21:22:48.869466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.341 [2024-11-19 21:22:48.869482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.341 [2024-11-19 21:22:48.869500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:8 PRP1 0x0 PRP2 0x0 00:33:26.341 [2024-11-19 21:22:48.869518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.869538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.869553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:33:26.342 [2024-11-19 21:22:48.869570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24040 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.869587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.869606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.869622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.342 [2024-11-19 21:22:48.869640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24048 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.869659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.869678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.869695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.342 [2024-11-19 21:22:48.869712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24056 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.869731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.869755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.869772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.342 [2024-11-19 21:22:48.869789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.869807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.869826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.869842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.342 [2024-11-19 21:22:48.869859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24072 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.869878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.869897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.869914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.342 [2024-11-19 21:22:48.869931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24080 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.869949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.869969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.869985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.342 [2024-11-19 
21:22:48.870002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24088 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.870020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.870039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.870078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.342 [2024-11-19 21:22:48.870100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.870129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.870150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.870167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.342 [2024-11-19 21:22:48.870185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24104 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.870204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.870224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.870241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.342 [2024-11-19 21:22:48.870259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24112 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.870278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.870300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.870317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.342 [2024-11-19 21:22:48.870335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23672 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.870384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.870405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.870422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.342 [2024-11-19 21:22:48.870440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24120 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.870458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.870476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.870492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.342 [2024-11-19 21:22:48.870509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.870528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.870547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.870563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.342 [2024-11-19 21:22:48.870580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24136 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.870598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.870617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.870634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.342 [2024-11-19 21:22:48.870650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24144 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.870668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.870686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.870703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.342 [2024-11-19 21:22:48.870720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24152 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.870739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.870757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.870774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.342 [2024-11-19 21:22:48.870791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.870810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.870828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.870845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.342 [2024-11-19 21:22:48.870862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24168 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.870880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.870899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.870919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.342 [2024-11-19 21:22:48.870938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:24176 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.870956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.870975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.870991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.342 [2024-11-19 21:22:48.871009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24184 len:8 PRP1 0x0 PRP2 0x0 00:33:26.342 [2024-11-19 21:22:48.871027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.342 [2024-11-19 21:22:48.871046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.342 [2024-11-19 21:22:48.871089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.343 [2024-11-19 21:22:48.871119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:8 PRP1 0x0 PRP2 0x0 00:33:26.343 [2024-11-19 21:22:48.871139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.343 [2024-11-19 21:22:48.871168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.343 [2024-11-19 21:22:48.871186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.343 [2024-11-19 21:22:48.871204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24200 len:8 PRP1 0x0 PRP2 0x0 00:33:26.343 [2024-11-19 21:22:48.871223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.343 [2024-11-19 21:22:48.871244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.343 [2024-11-19 21:22:48.871261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.343 [2024-11-19 21:22:48.871279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24208 len:8 PRP1 0x0 PRP2 0x0 00:33:26.343 [2024-11-19 21:22:48.871298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.343 [2024-11-19 21:22:48.871318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.343 [2024-11-19 21:22:48.871342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.343 [2024-11-19 21:22:48.871374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24216 len:8 PRP1 0x0 PRP2 0x0 00:33:26.343 [2024-11-19 21:22:48.871394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.343 [2024-11-19 21:22:48.871427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.343 [2024-11-19 21:22:48.871445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.343 [2024-11-19 21:22:48.871463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:8 PRP1 0x0 PRP2 0x0 
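The NOTICE pairs above are the SPDK NVMe driver aborting the requests still queued on an I/O queue pair: each pending READ/WRITE is echoed by nvme_io_qpair_print_command() and then completed manually by nvme_qpair_manual_complete_request() with the generic status "ABORTED - SQ DELETION", printed as (00/08), i.e. status code type 0x00 (generic) and status code 0x08 (command aborted due to SQ deletion). As a rough sketch only, not part of this test run, an application-side completion callback written against the public SPDK NVMe API could recognize that status as shown below; the function name io_complete_cb and the resubmit comment are illustrative assumptions, not code from this build.

#include <stdio.h>
#include "spdk/nvme.h"  /* public SPDK NVMe API, pulls in spdk/nvme_spec.h */

/* Illustrative spdk_nvme_cmd_cb callback (hypothetical name): classify
 * completions carrying the generic status "ABORTED - SQ DELETION"
 * (SCT 0x00 / SC 0x08), which the log above prints as (00/08). */
static void
io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;

	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The submission queue went away (qpair disconnect/reset);
		 * the I/O never reached the namespace and is typically safe
		 * to resubmit once the qpair is reconnected. */
		fprintf(stderr, "I/O aborted by SQ deletion\n");
		return;
	}

	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "I/O failed: sct=0x%x sc=0x%x\n",
			cpl->status.sct, cpl->status.sc);
	}
}

Such a callback would be passed as the cb_fn argument of spdk_nvme_ns_cmd_read()/spdk_nvme_ns_cmd_write(); in this log the I/Os are generated by the test tool itself, so the snippet is only meant to decode the (00/08) notation seen in the completions that follow.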
00:33:26.343 [2024-11-19 21:22:48.871481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.343 [2024-11-19 21:22:48.871501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.343 [2024-11-19 21:22:48.871516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.343 [2024-11-19 21:22:48.871533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24232 len:8 PRP1 0x0 PRP2 0x0 00:33:26.343 [2024-11-19 21:22:48.871551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.343 [2024-11-19 21:22:48.871575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.343 [2024-11-19 21:22:48.871603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.343 [2024-11-19 21:22:48.871621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24240 len:8 PRP1 0x0 PRP2 0x0 00:33:26.343 [2024-11-19 21:22:48.871640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.343 [2024-11-19 21:22:48.871659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.343 [2024-11-19 21:22:48.871676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.343 [2024-11-19 21:22:48.871693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24248 len:8 PRP1 0x0 PRP2 0x0 00:33:26.343 [2024-11-19 21:22:48.871712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.343 [2024-11-19 21:22:48.871731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.343 [2024-11-19 21:22:48.871747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.343 [2024-11-19 21:22:48.871765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:8 PRP1 0x0 PRP2 0x0 00:33:26.343 [2024-11-19 21:22:48.871783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.343 [2024-11-19 21:22:48.871801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.343 [2024-11-19 21:22:48.871817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.343 [2024-11-19 21:22:48.871834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24264 len:8 PRP1 0x0 PRP2 0x0 00:33:26.343 [2024-11-19 21:22:48.871852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.343 [2024-11-19 21:22:48.871871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.343 [2024-11-19 21:22:48.871893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.343 [2024-11-19 21:22:48.871911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24272 len:8 PRP1 0x0 PRP2 0x0 00:33:26.343 [2024-11-19 21:22:48.871930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.343 [2024-11-19 21:22:48.871949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.343 [2024-11-19 21:22:48.871965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.343 [2024-11-19 21:22:48.871982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24280 len:8 PRP1 0x0 PRP2 0x0 00:33:26.343 [2024-11-19 21:22:48.872000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.343 [2024-11-19 21:22:48.872019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.343 [2024-11-19 21:22:48.872036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.343 [2024-11-19 21:22:48.872076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:8 PRP1 0x0 PRP2 0x0 00:33:26.343 [2024-11-19 21:22:48.872099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.343 [2024-11-19 21:22:48.872125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.343 [2024-11-19 21:22:48.872142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.343 [2024-11-19 21:22:48.872160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24296 len:8 PRP1 0x0 PRP2 0x0 00:33:26.343 [2024-11-19 21:22:48.872184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.343 [2024-11-19 21:22:48.872205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.343 [2024-11-19 21:22:48.872222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.343 [2024-11-19 21:22:48.872240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24304 len:8 PRP1 0x0 PRP2 0x0 00:33:26.343 [2024-11-19 21:22:48.872260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.343 [2024-11-19 21:22:48.872280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.343 [2024-11-19 21:22:48.872297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.343 [2024-11-19 21:22:48.872314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24312 len:8 PRP1 0x0 PRP2 0x0 00:33:26.343 [2024-11-19 21:22:48.872333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.343 [2024-11-19 21:22:48.872379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.343 [2024-11-19 21:22:48.872396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.343 [2024-11-19 21:22:48.872413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:8 PRP1 0x0 PRP2 0x0 00:33:26.343 [2024-11-19 21:22:48.872448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.343 [2024-11-19 21:22:48.872467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.343 [2024-11-19 21:22:48.872483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.343 [2024-11-19 21:22:48.872500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24328 len:8 PRP1 0x0 PRP2 0x0 00:33:26.343 [2024-11-19 21:22:48.872518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.343 [2024-11-19 21:22:48.872537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.343 [2024-11-19 21:22:48.872559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.344 [2024-11-19 21:22:48.872576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24336 len:8 PRP1 0x0 PRP2 0x0 00:33:26.344 [2024-11-19 21:22:48.872596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.344 [2024-11-19 21:22:48.872614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.344 [2024-11-19 21:22:48.872630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.344 [2024-11-19 21:22:48.872647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24344 len:8 PRP1 0x0 PRP2 0x0 00:33:26.344 [2024-11-19 21:22:48.872666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.344 [2024-11-19 21:22:48.872685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.344 [2024-11-19 21:22:48.872701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.344 [2024-11-19 21:22:48.872723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:8 PRP1 0x0 PRP2 0x0 00:33:26.344 [2024-11-19 21:22:48.872742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.344 [2024-11-19 21:22:48.872760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.344 [2024-11-19 21:22:48.872776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.344 [2024-11-19 21:22:48.872796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24360 len:8 PRP1 0x0 PRP2 0x0 00:33:26.344 [2024-11-19 21:22:48.872815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.344 [2024-11-19 21:22:48.872834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.344 [2024-11-19 21:22:48.872850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.344 [2024-11-19 21:22:48.872866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24368 len:8 PRP1 0x0 PRP2 0x0 00:33:26.344 [2024-11-19 21:22:48.872884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:33:26.344 [2024-11-19 21:22:48.872902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.344 [2024-11-19 21:22:48.872918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.344 [2024-11-19 21:22:48.872935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24376 len:8 PRP1 0x0 PRP2 0x0 00:33:26.344 [2024-11-19 21:22:48.872952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.344 [2024-11-19 21:22:48.872971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.344 [2024-11-19 21:22:48.872987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.344 [2024-11-19 21:22:48.873004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:8 PRP1 0x0 PRP2 0x0 00:33:26.344 [2024-11-19 21:22:48.873021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.344 [2024-11-19 21:22:48.873039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.344 [2024-11-19 21:22:48.873077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.344 [2024-11-19 21:22:48.873098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24392 len:8 PRP1 0x0 PRP2 0x0 00:33:26.344 [2024-11-19 21:22:48.873117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.344 [2024-11-19 21:22:48.873138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.344 [2024-11-19 21:22:48.873156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.344 [2024-11-19 21:22:48.873174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24400 len:8 PRP1 0x0 PRP2 0x0 00:33:26.344 [2024-11-19 21:22:48.873193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.344 [2024-11-19 21:22:48.873212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.344 [2024-11-19 21:22:48.873229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.344 [2024-11-19 21:22:48.873247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23680 len:8 PRP1 0x0 PRP2 0x0 00:33:26.344 [2024-11-19 21:22:48.873265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.344 [2024-11-19 21:22:48.873285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.344 [2024-11-19 21:22:48.873302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.344 [2024-11-19 21:22:48.873326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23688 len:8 PRP1 0x0 PRP2 0x0 00:33:26.344 [2024-11-19 21:22:48.873346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.344 [2024-11-19 21:22:48.873381] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.344 [2024-11-19 21:22:48.873402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.344 [2024-11-19 21:22:48.873419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23696 len:8 PRP1 0x0 PRP2 0x0 00:33:26.344 [2024-11-19 21:22:48.873437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.344 [2024-11-19 21:22:48.873455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.344 [2024-11-19 21:22:48.873472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.344 [2024-11-19 21:22:48.873488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23704 len:8 PRP1 0x0 PRP2 0x0 00:33:26.344 [2024-11-19 21:22:48.873506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.344 [2024-11-19 21:22:48.873525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.344 [2024-11-19 21:22:48.873540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.344 [2024-11-19 21:22:48.873557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23712 len:8 PRP1 0x0 PRP2 0x0 00:33:26.344 [2024-11-19 21:22:48.873574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.344 [2024-11-19 21:22:48.873593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.344 [2024-11-19 21:22:48.873608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.344 [2024-11-19 21:22:48.873624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23720 len:8 PRP1 0x0 PRP2 0x0 00:33:26.344 [2024-11-19 21:22:48.873642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.344 [2024-11-19 21:22:48.873660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.344 [2024-11-19 21:22:48.873676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.344 [2024-11-19 21:22:48.873692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23728 len:8 PRP1 0x0 PRP2 0x0 00:33:26.344 [2024-11-19 21:22:48.873710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.345 [2024-11-19 21:22:48.873744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.345 [2024-11-19 21:22:48.873761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.345 [2024-11-19 21:22:48.873778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24408 len:8 PRP1 0x0 PRP2 0x0 00:33:26.345 [2024-11-19 21:22:48.873797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.345 [2024-11-19 21:22:48.873816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:33:26.345 [2024-11-19 21:22:48.873833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.345 [2024-11-19 21:22:48.873849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:8 PRP1 0x0 PRP2 0x0 00:33:26.345 [2024-11-19 21:22:48.873868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.345 [2024-11-19 21:22:48.873887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.345 [2024-11-19 21:22:48.873903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.345 [2024-11-19 21:22:48.873925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24424 len:8 PRP1 0x0 PRP2 0x0 00:33:26.345 [2024-11-19 21:22:48.873944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.345 [2024-11-19 21:22:48.873968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.345 [2024-11-19 21:22:48.873986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.345 [2024-11-19 21:22:48.874002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24432 len:8 PRP1 0x0 PRP2 0x0 00:33:26.345 [2024-11-19 21:22:48.874020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.345 [2024-11-19 21:22:48.874040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.345 [2024-11-19 21:22:48.874135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.345 [2024-11-19 21:22:48.874155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24440 len:8 PRP1 0x0 PRP2 0x0 00:33:26.345 [2024-11-19 21:22:48.874175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.345 [2024-11-19 21:22:48.874197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.345 [2024-11-19 21:22:48.874215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.345 [2024-11-19 21:22:48.874232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:8 PRP1 0x0 PRP2 0x0 00:33:26.345 [2024-11-19 21:22:48.874252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.345 [2024-11-19 21:22:48.874272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.345 [2024-11-19 21:22:48.874289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.345 [2024-11-19 21:22:48.874306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24456 len:8 PRP1 0x0 PRP2 0x0 00:33:26.345 [2024-11-19 21:22:48.874325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.345 [2024-11-19 21:22:48.874345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.345 [2024-11-19 
21:22:48.874362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.345 [2024-11-19 21:22:48.874379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24464 len:8 PRP1 0x0 PRP2 0x0 00:33:26.345 [2024-11-19 21:22:48.874398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.345 [2024-11-19 21:22:48.874432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.345 [2024-11-19 21:22:48.874450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.345 [2024-11-19 21:22:48.874466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24472 len:8 PRP1 0x0 PRP2 0x0 00:33:26.345 [2024-11-19 21:22:48.874484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.345 [2024-11-19 21:22:48.874502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.345 [2024-11-19 21:22:48.874518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.345 [2024-11-19 21:22:48.874534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:8 PRP1 0x0 PRP2 0x0 00:33:26.345 [2024-11-19 21:22:48.874552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.345 [2024-11-19 21:22:48.874570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.345 [2024-11-19 21:22:48.874586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.345 [2024-11-19 21:22:48.874608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24488 len:8 PRP1 0x0 PRP2 0x0 00:33:26.345 [2024-11-19 21:22:48.874631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.345 [2024-11-19 21:22:48.874651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.345 [2024-11-19 21:22:48.874667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.345 [2024-11-19 21:22:48.874684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24496 len:8 PRP1 0x0 PRP2 0x0 00:33:26.345 [2024-11-19 21:22:48.874701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.345 [2024-11-19 21:22:48.874721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.345 [2024-11-19 21:22:48.874737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.345 [2024-11-19 21:22:48.874754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24504 len:8 PRP1 0x0 PRP2 0x0 00:33:26.345 [2024-11-19 21:22:48.874771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.345 [2024-11-19 21:22:48.874789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.345 [2024-11-19 21:22:48.874805] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.345 [2024-11-19 21:22:48.874827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:8 PRP1 0x0 PRP2 0x0 00:33:26.345 [2024-11-19 21:22:48.874846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.345 [2024-11-19 21:22:48.889916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.345 [2024-11-19 21:22:48.889949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.345 [2024-11-19 21:22:48.889986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24520 len:8 PRP1 0x0 PRP2 0x0 00:33:26.345 [2024-11-19 21:22:48.890008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.345 [2024-11-19 21:22:48.890029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.346 [2024-11-19 21:22:48.890061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.346 [2024-11-19 21:22:48.890090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24528 len:8 PRP1 0x0 PRP2 0x0 00:33:26.346 [2024-11-19 21:22:48.890112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.346 [2024-11-19 21:22:48.890133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.346 [2024-11-19 21:22:48.890150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.346 [2024-11-19 21:22:48.890167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24536 len:8 PRP1 0x0 PRP2 0x0 00:33:26.346 [2024-11-19 21:22:48.890186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.346 [2024-11-19 21:22:48.890206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.346 [2024-11-19 21:22:48.890223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.346 [2024-11-19 21:22:48.890240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:8 PRP1 0x0 PRP2 0x0 00:33:26.346 [2024-11-19 21:22:48.890260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.346 [2024-11-19 21:22:48.890280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.346 [2024-11-19 21:22:48.890313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.346 [2024-11-19 21:22:48.890338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24552 len:8 PRP1 0x0 PRP2 0x0 00:33:26.346 [2024-11-19 21:22:48.890359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.346 [2024-11-19 21:22:48.890379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.346 [2024-11-19 21:22:48.890396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:33:26.346 [2024-11-19 21:22:48.890428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24560 len:8 PRP1 0x0 PRP2 0x0 00:33:26.346 [2024-11-19 21:22:48.890447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.346 [2024-11-19 21:22:48.890466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.346 [2024-11-19 21:22:48.890482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.346 [2024-11-19 21:22:48.890498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24568 len:8 PRP1 0x0 PRP2 0x0 00:33:26.346 [2024-11-19 21:22:48.890516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.346 [2024-11-19 21:22:48.890535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.346 [2024-11-19 21:22:48.890552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.346 [2024-11-19 21:22:48.890570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:8 PRP1 0x0 PRP2 0x0 00:33:26.346 [2024-11-19 21:22:48.890588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.346 [2024-11-19 21:22:48.890607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.346 [2024-11-19 21:22:48.890623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.346 [2024-11-19 21:22:48.890640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24584 len:8 PRP1 0x0 PRP2 0x0 00:33:26.346 [2024-11-19 21:22:48.890658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.346 [2024-11-19 21:22:48.890676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.346 [2024-11-19 21:22:48.890693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.346 [2024-11-19 21:22:48.890709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24592 len:8 PRP1 0x0 PRP2 0x0 00:33:26.346 [2024-11-19 21:22:48.890727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.346 [2024-11-19 21:22:48.891011] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:26.346 [2024-11-19 21:22:48.891040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:33:26.346 [2024-11-19 21:22:48.891162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:26.346 [2024-11-19 21:22:48.896464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:33:26.346 5742.40 IOPS, 22.43 MiB/s [2024-11-19T20:23:00.141Z] [2024-11-19 21:22:48.919798] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
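The repeated "ABORTED - SQ DELETION (00/08)" completions above are queued READ/WRITE commands that bdev_nvme completes manually once the TCP path drops: status code type 0x0 (generic command status) with status code 0x08 (Command Aborted due to SQ Deletion), and with p/m/dnr all clear, so the I/O remains retryable on the next path. The following is a minimal, self-contained sketch of how those fields unpack from Completion Queue Entry Dword 3 per the NVMe base-spec bit layout; it is illustrative only and not SPDK source code.

#include <stdint.h>
#include <stdio.h>

/* Decoded view of the fields the log prints as "(SCT/SC) cid:... p:.. m:.. dnr:..". */
struct decoded_status {
    uint16_t cid;  /* Command Identifier (Dword 3 bits 15:0)                 */
    uint8_t  p;    /* Phase tag (bit 16)                                     */
    uint8_t  sc;   /* Status Code, 0x08 = Command Aborted due to SQ Deletion */
    uint8_t  sct;  /* Status Code Type, 0x0 = generic command status         */
    uint8_t  m;    /* More information available in a log page               */
    uint8_t  dnr;  /* Do Not Retry                                           */
};

/* dw3 is the raw Completion Queue Entry Dword 3. */
static struct decoded_status decode_cqe_dw3(uint32_t dw3)
{
    struct decoded_status s;
    s.cid = (uint16_t)(dw3 & 0xffff);
    s.p   = (dw3 >> 16) & 0x1;
    s.sc  = (dw3 >> 17) & 0xff;
    s.sct = (dw3 >> 25) & 0x7;
    s.m   = (dw3 >> 30) & 0x1;
    s.dnr = (dw3 >> 31) & 0x1;
    return s;
}

int main(void)
{
    /* SCT=0x0, SC=0x08 reproduces the "ABORTED - SQ DELETION (00/08)" seen in the log. */
    uint32_t dw3 = (0x0u << 25) | (0x08u << 17);
    struct decoded_status s = decode_cqe_dw3(dw3);
    printf("(%02x/%02x) cid:%u p:%u m:%u dnr:%u\n", s.sct, s.sc, s.cid, s.p, s.m, s.dnr);
    return 0;
}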
00:33:26.346 5784.33 IOPS, 22.60 MiB/s [2024-11-19T20:23:00.141Z] 5844.14 IOPS, 22.83 MiB/s [2024-11-19T20:23:00.141Z] 5885.25 IOPS, 22.99 MiB/s [2024-11-19T20:23:00.141Z] 5917.44 IOPS, 23.12 MiB/s [2024-11-19T20:23:00.141Z] [2024-11-19 21:22:53.410866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:121008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.346 [2024-11-19 21:22:53.410926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.346 [2024-11-19 21:22:53.410977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:121016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.346 [2024-11-19 21:22:53.411001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.346 [2024-11-19 21:22:53.411026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:121024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.346 [2024-11-19 21:22:53.411047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.346 [2024-11-19 21:22:53.411078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:121032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.346 [2024-11-19 21:22:53.411118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.346 [2024-11-19 21:22:53.411143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.346 [2024-11-19 21:22:53.411165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.346 [2024-11-19 21:22:53.411188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:121048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.346 [2024-11-19 21:22:53.411209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.346 [2024-11-19 21:22:53.411232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.346 [2024-11-19 21:22:53.411253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.346 [2024-11-19 21:22:53.411277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.346 [2024-11-19 21:22:53.411298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.346 [2024-11-19 21:22:53.411322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:121072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.346 [2024-11-19 21:22:53.411343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.411367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.411403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.411427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:121088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.411447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.411470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:121096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.411491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.411514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.411534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.411562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:121112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.411584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.411606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.411627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.411651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:121128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.411672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.411695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:121136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.411715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.411738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:121144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.411759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.411783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:121152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.411804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.411826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 
[2024-11-19 21:22:53.411847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.411871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.411891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.411914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.411935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.411976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.411998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.412021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:121192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.412042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.412094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.412118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.412143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.412168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.412193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.412215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.412238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.412260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.412284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.412305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.412328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.412349] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.412388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.412411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.412434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.412455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.412478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:121264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.412498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.412522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:121272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.412543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.412566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:121280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.412587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.412610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.347 [2024-11-19 21:22:53.412631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.347 [2024-11-19 21:22:53.412655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.412676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.412699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.412719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.412742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.412766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.412790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.412811] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.412834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.412855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.412878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.412899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.412922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.412943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.412966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.412986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.413009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.413029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.413052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.413093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.413121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.413142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.413165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.413187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.413210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.413232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.413255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.413277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.413300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.413321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.413353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.413376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.413414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.413435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.413458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.413478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.413501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.413522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.413544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.413565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.413588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.413609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.413648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.413670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.413694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.348 [2024-11-19 21:22:53.413715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.348 [2024-11-19 21:22:53.413739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.413760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.413783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.413804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.413828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.413849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.413873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.413894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.413917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.413942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.413967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.413989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:121536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:121552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 
[2024-11-19 21:22:53.414280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:121640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:121648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.414955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.414980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:121696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.415001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.415039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:121704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.349 [2024-11-19 21:22:53.415061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.349 [2024-11-19 21:22:53.415111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.415134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.415158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:121720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.415185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.415210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.415232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.415256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:104 nsid:1 lba:121736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.415278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.415303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:121296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.350 [2024-11-19 21:22:53.415325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.415350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.415372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.415412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.415434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.415458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.415479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.415512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.415533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.415557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.415578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.415602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:121784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.415623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.415646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.415667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.415691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.415712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.415735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121808 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.415756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.415780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.415806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.415831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.415853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.415877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.415898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.415921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.415942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.415966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:121848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.415986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.416009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:121856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.416031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.416054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.416099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.416127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.416149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.416173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.416195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.416219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:26.350 [2024-11-19 21:22:53.416241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.416265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.416286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.416310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:121904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.416332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.416356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.416377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.416423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:121920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.416445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.416469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:121928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.416490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.416513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:121936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.416534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.416559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.416580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.416603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.416624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.416647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.416668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.416691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:121968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.416712] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.416735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.416756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.350 [2024-11-19 21:22:53.416779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.350 [2024-11-19 21:22:53.416800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.351 [2024-11-19 21:22:53.416823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.351 [2024-11-19 21:22:53.416844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.351 [2024-11-19 21:22:53.416892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.351 [2024-11-19 21:22:53.416918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122000 len:8 PRP1 0x0 PRP2 0x0 00:33:26.351 [2024-11-19 21:22:53.416939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.351 [2024-11-19 21:22:53.416965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.351 [2024-11-19 21:22:53.416984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.351 [2024-11-19 21:22:53.417001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122008 len:8 PRP1 0x0 PRP2 0x0 00:33:26.351 [2024-11-19 21:22:53.417026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.351 [2024-11-19 21:22:53.417047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.351 [2024-11-19 21:22:53.417063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.351 [2024-11-19 21:22:53.417105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122016 len:8 PRP1 0x0 PRP2 0x0 00:33:26.351 [2024-11-19 21:22:53.417126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.351 [2024-11-19 21:22:53.417147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:26.351 [2024-11-19 21:22:53.417165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:26.351 [2024-11-19 21:22:53.417182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122024 len:8 PRP1 0x0 PRP2 0x0 00:33:26.351 [2024-11-19 21:22:53.417201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.351 [2024-11-19 21:22:53.417485] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 
10.0.0.2:4422 to 10.0.0.2:4420 00:33:26.351 [2024-11-19 21:22:53.417556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:26.351 [2024-11-19 21:22:53.417583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.351 [2024-11-19 21:22:53.417606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:26.351 [2024-11-19 21:22:53.417627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.351 [2024-11-19 21:22:53.417648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:26.351 [2024-11-19 21:22:53.417669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.351 [2024-11-19 21:22:53.417690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:26.351 [2024-11-19 21:22:53.417711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:26.351 [2024-11-19 21:22:53.417730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:33:26.351 [2024-11-19 21:22:53.417808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:26.351 [2024-11-19 21:22:53.421657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:33:26.351 [2024-11-19 21:22:53.539601] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
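The trace above shows one full failover cycle: queued WRITEs on the old path are aborted with SQ DELETION, bdev_nvme starts a failover from 10.0.0.2:4422 to 10.0.0.2:4420, the stale admin queue pair is torn down, and the controller reset completes on the new path. A minimal sketch of forcing the same transition by hand, assuming the listeners, subsystem NQN and bdevperf RPC socket from this run are still in place (the trigger host/failover.sh actually used earlier in the log may differ):
# Drop the path currently carrying I/O; bdev_nvme should fail over to a remaining listener.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# The controller entry survives the path drop; only the active trid changes.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0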
00:33:26.351 5847.20 IOPS, 22.84 MiB/s [2024-11-19T20:23:00.146Z] 5887.64 IOPS, 23.00 MiB/s [2024-11-19T20:23:00.146Z] 5920.83 IOPS, 23.13 MiB/s [2024-11-19T20:23:00.146Z] 5941.08 IOPS, 23.21 MiB/s [2024-11-19T20:23:00.146Z] 5960.43 IOPS, 23.28 MiB/s 
00:33:26.351 Latency(us) 
00:33:26.351 [2024-11-19T20:23:00.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:33:26.351 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:33:26.351 Verification LBA range: start 0x0 length 0x4000 
00:33:26.351 NVMe0n1 : 15.01 5969.75 23.32 805.50 0.00 18857.47 1104.40 55147.33 
00:33:26.351 [2024-11-19T20:23:00.146Z] =================================================================================================================== 
00:33:26.351 [2024-11-19T20:23:00.146Z] Total : 5969.75 23.32 805.50 0.00 18857.47 1104.40 55147.33 
00:33:26.351 Received shutdown signal, test time was about 15.000000 seconds 
00:33:26.351 
00:33:26.351 Latency(us) 
00:33:26.351 [2024-11-19T20:23:00.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:33:26.351 [2024-11-19T20:23:00.146Z] =================================================================================================================== 
00:33:26.351 [2024-11-19T20:23:00.146Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:33:26.351 21:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:33:26.351 21:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 
00:33:26.351 21:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 
00:33:26.351 21:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3125923 
00:33:26.351 21:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:33:26.351 21:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3125923 /var/tmp/bdevperf.sock 
00:33:26.351 21:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3125923 ']' 
00:33:26.351 21:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:33:26.351 21:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 
00:33:26.351 21:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:33:26.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
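The grep -c 'Resetting controller successful' / (( count != 3 )) pair above is the pass gate for the first phase: every forced path switch must end with bdev_nvme reporting a successful controller reset, three in total. A minimal sketch of the same check, assuming the bdevperf output was captured to the try.txt file this run cats later:
log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
count=$(grep -c 'Resetting controller successful' "$log")
# Three path switches were forced, so exactly three successful resets are expected.
(( count == 3 )) || { echo "expected 3 successful resets, got $count"; exit 1; }
The fresh bdevperf instance started right after (pid 3125923) runs with -z, so it sits idle until perform_tests is sent over /var/tmp/bdevperf.sock.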
00:33:26.351 21:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.351 21:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:27.286 21:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:27.286 21:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:27.286 21:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:27.544 [2024-11-19 21:23:01.080204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:27.544 21:23:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:27.803 [2024-11-19 21:23:01.341063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:27.803 21:23:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:28.061 NVMe0n1 00:33:28.061 21:23:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:28.319 00:33:28.319 21:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:28.885 00:33:28.885 21:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:28.885 21:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:29.143 21:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:29.401 21:23:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:32.681 21:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:32.681 21:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:32.682 21:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3126714 00:33:32.682 21:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:32.682 21:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3126714 00:33:34.055 { 00:33:34.055 "results": [ 00:33:34.055 { 00:33:34.055 "job": "NVMe0n1", 00:33:34.055 "core_mask": "0x1", 
00:33:34.055 "workload": "verify", 00:33:34.055 "status": "finished", 00:33:34.055 "verify_range": { 00:33:34.055 "start": 0, 00:33:34.055 "length": 16384 00:33:34.055 }, 00:33:34.055 "queue_depth": 128, 00:33:34.055 "io_size": 4096, 00:33:34.055 "runtime": 1.016427, 00:33:34.055 "iops": 6163.74810980031, 00:33:34.055 "mibps": 24.07714105390746, 00:33:34.055 "io_failed": 0, 00:33:34.055 "io_timeout": 0, 00:33:34.055 "avg_latency_us": 20625.473802843546, 00:33:34.055 "min_latency_us": 3228.254814814815, 00:33:34.055 "max_latency_us": 19806.435555555556 00:33:34.055 } 00:33:34.055 ], 00:33:34.055 "core_count": 1 00:33:34.055 } 00:33:34.055 21:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:34.055 [2024-11-19 21:22:59.857977] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:33:34.055 [2024-11-19 21:22:59.858159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3125923 ] 00:33:34.055 [2024-11-19 21:22:59.996909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.055 [2024-11-19 21:23:00.124446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.055 [2024-11-19 21:23:03.100483] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:34.055 [2024-11-19 21:23:03.100617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:34.055 [2024-11-19 21:23:03.100657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.055 [2024-11-19 21:23:03.100685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:34.055 [2024-11-19 21:23:03.100707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.055 [2024-11-19 21:23:03.100729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:34.055 [2024-11-19 21:23:03.100765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.055 [2024-11-19 21:23:03.100788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:34.055 [2024-11-19 21:23:03.100810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.055 [2024-11-19 21:23:03.100832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:33:34.055 [2024-11-19 21:23:03.100932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:33:34.055 [2024-11-19 21:23:03.100986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:34.055 [2024-11-19 21:23:03.110312] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:33:34.055 Running I/O for 1 seconds... 00:33:34.055 6107.00 IOPS, 23.86 MiB/s 00:33:34.055 Latency(us) 00:33:34.055 [2024-11-19T20:23:07.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:34.055 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:34.056 Verification LBA range: start 0x0 length 0x4000 00:33:34.056 NVMe0n1 : 1.02 6163.75 24.08 0.00 0.00 20625.47 3228.25 19806.44 00:33:34.056 [2024-11-19T20:23:07.851Z] =================================================================================================================== 00:33:34.056 [2024-11-19T20:23:07.851Z] Total : 6163.75 24.08 0.00 0.00 20625.47 3228.25 19806.44 00:33:34.056 21:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:34.056 21:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:34.056 21:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:34.313 21:23:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:34.313 21:23:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:34.572 21:23:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:35.137 21:23:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:38.417 21:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:38.417 21:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:38.417 21:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3125923 00:33:38.417 21:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3125923 ']' 00:33:38.417 21:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3125923 00:33:38.417 21:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:38.417 21:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:38.417 21:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3125923 00:33:38.417 21:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:38.417 21:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:38.417 21:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3125923' 00:33:38.417 killing process with pid 3125923 00:33:38.417 21:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3125923 00:33:38.417 21:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3125923 00:33:38.983 21:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:39.240 21:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:39.498 rmmod nvme_tcp 00:33:39.498 rmmod nvme_fabrics 00:33:39.498 rmmod nvme_keyring 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3123385 ']' 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3123385 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3123385 ']' 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3123385 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3123385 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3123385' 00:33:39.498 killing process with pid 3123385 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3123385 00:33:39.498 21:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3123385 00:33:40.875 21:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:33:40.875 21:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:40.875 21:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:40.875 21:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:33:40.875 21:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:33:40.875 21:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:40.875 21:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:33:40.875 21:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:40.875 21:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:40.875 21:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.875 21:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:40.875 21:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:42.778 21:23:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:42.778 00:33:42.778 real 0m40.313s 00:33:42.778 user 2m21.826s 00:33:42.778 sys 0m6.261s 00:33:42.778 21:23:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:42.778 21:23:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:42.778 ************************************ 00:33:42.778 END TEST nvmf_failover 00:33:42.778 ************************************ 00:33:42.778 21:23:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:42.778 21:23:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:42.778 21:23:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:42.778 21:23:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.778 ************************************ 00:33:42.778 START TEST nvmf_host_discovery 00:33:42.778 ************************************ 00:33:42.778 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:42.778 * Looking for test storage... 
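run_test is the autotest wrapper visible above: it prints the START TEST banner, times the script (the real/user/sys lines), and closes with the END TEST banner. The next case, nvmf_host_discovery, is just test/nvmf/host/discovery.sh executed under that wrapper, so it can also be reproduced directly from the workspace; a sketch, assuming the hardware and config of this job:
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
test/nvmf/host/discovery.sh --transport=tcp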
00:33:42.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:42.778 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:42.778 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:33:42.778 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:43.037 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:43.037 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:43.037 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:43.037 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:43.037 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:33:43.037 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:33:43.037 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:33:43.037 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:33:43.037 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:33:43.037 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:33:43.037 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:33:43.037 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:43.037 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:33:43.037 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:43.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.038 --rc genhtml_branch_coverage=1 00:33:43.038 --rc genhtml_function_coverage=1 00:33:43.038 --rc genhtml_legend=1 00:33:43.038 --rc geninfo_all_blocks=1 00:33:43.038 --rc geninfo_unexecuted_blocks=1 00:33:43.038 00:33:43.038 ' 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:43.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.038 --rc genhtml_branch_coverage=1 00:33:43.038 --rc genhtml_function_coverage=1 00:33:43.038 --rc genhtml_legend=1 00:33:43.038 --rc geninfo_all_blocks=1 00:33:43.038 --rc geninfo_unexecuted_blocks=1 00:33:43.038 00:33:43.038 ' 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:43.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.038 --rc genhtml_branch_coverage=1 00:33:43.038 --rc genhtml_function_coverage=1 00:33:43.038 --rc genhtml_legend=1 00:33:43.038 --rc geninfo_all_blocks=1 00:33:43.038 --rc geninfo_unexecuted_blocks=1 00:33:43.038 00:33:43.038 ' 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:43.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.038 --rc genhtml_branch_coverage=1 00:33:43.038 --rc genhtml_function_coverage=1 00:33:43.038 --rc genhtml_legend=1 00:33:43.038 --rc geninfo_all_blocks=1 00:33:43.038 --rc geninfo_unexecuted_blocks=1 00:33:43.038 00:33:43.038 ' 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:43.038 21:23:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:43.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:33:43.038 21:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.025 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:45.025 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:33:45.025 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:45.025 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:45.025 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:45.025 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:45.025 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:45.025 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:33:45.025 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:45.025 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:33:45.025 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:33:45.025 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:33:45.025 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:33:45.025 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:33:45.025 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:33:45.025 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:45.025 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:45.025 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:45.026 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:45.026 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:45.026 21:23:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:45.026 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:45.026 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:45.026 
21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:45.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:45.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:33:45.026 00:33:45.026 --- 10.0.0.2 ping statistics --- 00:33:45.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.026 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:33:45.026 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:45.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:45.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:33:45.284 00:33:45.284 --- 10.0.0.1 ping statistics --- 00:33:45.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.284 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3129581 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3129581 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3129581 ']' 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:45.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:45.284 21:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.284 [2024-11-19 21:23:18.944575] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
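The block above is the phy-mode network bring-up from nvmf/common.sh: one port of the detected e810 pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, and both directions are ping-verified before the target is started inside the namespace. Condensed from the commands in the trace above (run as root):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator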
00:33:45.284 [2024-11-19 21:23:18.944700] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:45.541 [2024-11-19 21:23:19.096372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.541 [2024-11-19 21:23:19.232684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:45.541 [2024-11-19 21:23:19.232775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:45.541 [2024-11-19 21:23:19.232800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:45.541 [2024-11-19 21:23:19.232825] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:45.541 [2024-11-19 21:23:19.232844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:45.541 [2024-11-19 21:23:19.234458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.473 [2024-11-19 21:23:20.029476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.473 [2024-11-19 21:23:20.037670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.473 null0 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.473 null1 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3129739 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3129739 /tmp/host.sock 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3129739 ']' 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:46.473 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:46.473 21:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.473 [2024-11-19 21:23:20.156671] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
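At this point the target side has a TCP transport, a discovery listener on 10.0.0.2:8009 and two null bdevs, and a second nvmf_tgt has been launched as the host-side application with its RPC socket at /tmp/host.sock. Since rpc_cmd is a thin wrapper over scripts/rpc.py, the target-side setup traced above corresponds to roughly the following calls (a sketch, not a verbatim excerpt from the test script):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # default socket /var/tmp/spdk.sock

# TCP transport with the options used above (NVMF_TRANSPORT_OPTS='-t tcp -o', IO unit size 8192).
"$rpc" nvmf_create_transport -t tcp -o -u 8192

# Discovery listener the host will query at 10.0.0.2:8009.
"$rpc" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009

# Two 1000 MB, 512-byte-block null bdevs that back the namespaces added later.
"$rpc" bdev_null_create null0 1000 512
"$rpc" bdev_null_create null1 1000 512
"$rpc" bdev_wait_for_examine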
00:33:46.473 [2024-11-19 21:23:20.156857] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3129739 ] 00:33:46.731 [2024-11-19 21:23:20.305635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.731 [2024-11-19 21:23:20.443401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.667 [2024-11-19 21:23:21.397569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:47.667 21:23:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:47.667 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:33:47.926 21:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:48.493 [2024-11-19 21:23:22.181255] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:48.493 [2024-11-19 21:23:22.181297] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:48.493 [2024-11-19 21:23:22.181334] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:48.493 
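The host application has now started the discovery service against 10.0.0.2:8009 with hostnqn nqn.2021-12.io.spdk:test, and the target has gained subsystem nqn.2016-06.io.spdk:cnode0 with namespace null0, a data listener on port 4420 and the test host NQN allowed. The get_subsystem_names/get_bdev_list checks traced above are jq pipelines over host-socket RPCs; a sketch of that polling, assuming /tmp/host.sock remains the host RPC socket:

host_rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock "$@"; }

get_subsystem_names() {   # attached controller names, e.g. "nvme0"
    host_rpc bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {         # bdevs created from discovered namespaces, e.g. "nvme0n1"
    host_rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Poll, as waitforcondition does above, until discovery has attached the controller.
for _ in $(seq 1 10); do
    [[ "$(get_subsystem_names)" == "nvme0" ]] && break
    sleep 1
done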
[2024-11-19 21:23:22.267684] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:48.751 [2024-11-19 21:23:22.368858] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:48.751 [2024-11-19 21:23:22.370453] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2a00:1 started. 00:33:48.751 [2024-11-19 21:23:22.372860] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:48.751 [2024-11-19 21:23:22.372896] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:48.751 [2024-11-19 21:23:22.420449] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2a00 was disconnected and freed. delete nvme_qpair. 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:49.010 
21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:49.010 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:49.011 [2024-11-19 21:23:22.753502] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2c80:1 started. 00:33:49.011 [2024-11-19 21:23:22.760373] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2c80 was disconnected and freed. delete nvme_qpair. 
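After null1 is added as a second namespace, the test expects exactly one new notification on the host app; get_notification_count above is simply the length of the notify_get_notifications result starting from the last seen notify_id. A sketch of that check, with notify_id and expected_count standing in for the script variables of the same names:

notify_id=1        # last notification id already accounted for (per the trace above)
expected_count=1   # one new namespace -> one new bdev notification expected

count=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
(( count == expected_count )) || echo "unexpected notification count: $count" >&2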
00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.011 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.269 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:49.269 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:49.269 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:49.269 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:49.269 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:49.269 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.269 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.269 [2024-11-19 21:23:22.831229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:49.269 [2024-11-19 21:23:22.831875] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:49.269 [2024-11-19 21:23:22.831928] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:49.270 21:23:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:49.270 21:23:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:49.270 [2024-11-19 21:23:22.961150] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:49.528 [2024-11-19 21:23:23.226034] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:33:49.528 [2024-11-19 21:23:23.226149] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:49.528 [2024-11-19 21:23:23.226175] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:49.528 [2024-11-19 21:23:23.226190] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:50.465 21:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.465 [2024-11-19 21:23:24.043500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.465 [2024-11-19 21:23:24.043567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.465 [2024-11-19 21:23:24.043597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.465 [2024-11-19 21:23:24.043619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.465 [2024-11-19 21:23:24.043650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.465 [2024-11-19 21:23:24.043670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.465 [2024-11-19 21:23:24.043697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.465 [2024-11-19 21:23:24.043719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.465 [2024-11-19 21:23:24.043754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:50.465 [2024-11-19 21:23:24.043884] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:50.465 [2024-11-19 21:23:24.043931] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # 
local max=10 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:50.465 [2024-11-19 21:23:24.053473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:50.465 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.465 [2024-11-19 21:23:24.063501] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:50.465 [2024-11-19 21:23:24.063549] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:50.465 [2024-11-19 21:23:24.063575] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:50.465 [2024-11-19 21:23:24.063592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:50.465 [2024-11-19 21:23:24.063662] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:50.465 [2024-11-19 21:23:24.063950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.465 [2024-11-19 21:23:24.064010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:50.465 [2024-11-19 21:23:24.064047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:50.465 [2024-11-19 21:23:24.064118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:50.465 [2024-11-19 21:23:24.064170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:50.465 [2024-11-19 21:23:24.064196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:50.465 [2024-11-19 21:23:24.064227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:50.465 [2024-11-19 21:23:24.064253] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:50.465 [2024-11-19 21:23:24.064270] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:33:50.465 [2024-11-19 21:23:24.064284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:50.465 [2024-11-19 21:23:24.073701] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:50.465 [2024-11-19 21:23:24.073739] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:50.465 [2024-11-19 21:23:24.073757] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:50.465 [2024-11-19 21:23:24.073771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:50.465 [2024-11-19 21:23:24.073811] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:50.465 [2024-11-19 21:23:24.073993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.465 [2024-11-19 21:23:24.074034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:50.466 [2024-11-19 21:23:24.074060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:50.466 [2024-11-19 21:23:24.074125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:50.466 [2024-11-19 21:23:24.074157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:50.466 [2024-11-19 21:23:24.074178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:50.466 [2024-11-19 21:23:24.074198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:50.466 [2024-11-19 21:23:24.074231] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:50.466 [2024-11-19 21:23:24.074246] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:50.466 [2024-11-19 21:23:24.074257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:50.466 [2024-11-19 21:23:24.083855] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:50.466 [2024-11-19 21:23:24.083893] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:50.466 [2024-11-19 21:23:24.083911] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:50.466 [2024-11-19 21:23:24.083926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:50.466 [2024-11-19 21:23:24.083965] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:50.466 [2024-11-19 21:23:24.084164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.466 [2024-11-19 21:23:24.084201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:50.466 [2024-11-19 21:23:24.084225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:50.466 [2024-11-19 21:23:24.084258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:50.466 [2024-11-19 21:23:24.084304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:50.466 [2024-11-19 21:23:24.084329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:50.466 [2024-11-19 21:23:24.084366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:50.466 [2024-11-19 21:23:24.084401] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:50.466 [2024-11-19 21:23:24.084426] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:50.466 [2024-11-19 21:23:24.084440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:50.466 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.466 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:50.466 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:50.466 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:50.466 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:50.466 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:50.466 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:50.466 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:50.466 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:50.466 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.466 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:50.466 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.466 [2024-11-19 21:23:24.094010] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:50.466 [2024-11-19 21:23:24.094060] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:50.466 [2024-11-19 21:23:24.094089] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:33:50.466 [2024-11-19 21:23:24.094131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:50.466 [2024-11-19 21:23:24.094179] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:50.466 [2024-11-19 21:23:24.094329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.466 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:50.466 [2024-11-19 21:23:24.094367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:50.466 [2024-11-19 21:23:24.094395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:50.466 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:50.466 [2024-11-19 21:23:24.094435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:50.466 [2024-11-19 21:23:24.094492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:50.466 [2024-11-19 21:23:24.094520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:50.466 [2024-11-19 21:23:24.094542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:50.466 [2024-11-19 21:23:24.094580] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:50.466 [2024-11-19 21:23:24.094597] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:50.466 [2024-11-19 21:23:24.094616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:50.466 [2024-11-19 21:23:24.104219] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:50.466 [2024-11-19 21:23:24.104257] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:50.466 [2024-11-19 21:23:24.104274] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:50.466 [2024-11-19 21:23:24.104302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:50.466 [2024-11-19 21:23:24.104339] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:50.466 [2024-11-19 21:23:24.104555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.466 [2024-11-19 21:23:24.104596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:50.466 [2024-11-19 21:23:24.104637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:50.466 [2024-11-19 21:23:24.104675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:50.466 [2024-11-19 21:23:24.104710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:50.466 [2024-11-19 21:23:24.104734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:50.466 [2024-11-19 21:23:24.104755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:50.466 [2024-11-19 21:23:24.104774] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:50.466 [2024-11-19 21:23:24.104790] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:50.466 [2024-11-19 21:23:24.104803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:50.466 [2024-11-19 21:23:24.114381] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:50.466 [2024-11-19 21:23:24.114428] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:50.466 [2024-11-19 21:23:24.114447] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:50.466 [2024-11-19 21:23:24.114461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:50.466 [2024-11-19 21:23:24.114512] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:50.466 [2024-11-19 21:23:24.114699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.466 [2024-11-19 21:23:24.114740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:50.466 [2024-11-19 21:23:24.114766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:50.466 [2024-11-19 21:23:24.114802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:50.466 [2024-11-19 21:23:24.114837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:50.466 [2024-11-19 21:23:24.114861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:50.466 [2024-11-19 21:23:24.114883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:33:50.466 [2024-11-19 21:23:24.114902] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:50.466 [2024-11-19 21:23:24.114923] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:50.466 [2024-11-19 21:23:24.114938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:50.466 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.466 [2024-11-19 21:23:24.124556] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:50.466 [2024-11-19 21:23:24.124594] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:50.466 [2024-11-19 21:23:24.124612] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:50.466 [2024-11-19 21:23:24.124626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:50.466 [2024-11-19 21:23:24.124666] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:50.466 [2024-11-19 21:23:24.124818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.466 [2024-11-19 21:23:24.124858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:50.466 [2024-11-19 21:23:24.124883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:50.467 [2024-11-19 21:23:24.124919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:50.467 [2024-11-19 21:23:24.124953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:50.467 [2024-11-19 21:23:24.124977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:50.467 [2024-11-19 21:23:24.124999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:50.467 [2024-11-19 21:23:24.125019] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:50.467 [2024-11-19 21:23:24.125035] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:50.467 [2024-11-19 21:23:24.125048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
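The repeated errno 111 above is ECONNREFUSED: the listener on 10.0.0.2:4420 has been removed, so every reconnect attempt is rejected until the host settles on the 4421 path. A quick, illustrative way to confirm from the host side which port still accepts connections (not part of the test suite; helper name is made up):

    # Illustrative only: probe a TCP listener with bash's /dev/tcp redirection.
    check_listener() {
        local ip=$1 port=$2
        if timeout 1 bash -c "</dev/tcp/$ip/$port" 2>/dev/null; then
            echo "$ip:$port accepting connections"
        else
            echo "$ip:$port refused or unreachable"
        fi
    }
    # check_listener 10.0.0.2 4420   # expected: refused (listener was removed)
    # check_listener 10.0.0.2 4421   # expected: accepting (the surviving path)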
00:33:50.467 [2024-11-19 21:23:24.130201] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:50.467 [2024-11-19 21:23:24.130241] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@921 -- # get_notification_count 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:50.467 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:50.726 21:23:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.658 [2024-11-19 21:23:25.399960] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:51.658 [2024-11-19 21:23:25.400000] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:51.658 [2024-11-19 21:23:25.400052] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:51.917 [2024-11-19 21:23:25.527576] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:51.917 [2024-11-19 21:23:25.631659] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:33:51.917 [2024-11-19 21:23:25.633044] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x6150001f3e00:1 started. 
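The notification bookkeeping traced in the @74-@75 lines above counts only events newer than the current notify_id cursor and then advances that cursor; here the count lands on 2 and notify_id moves from 2 to 4. A sketch of that shape, an approximation of host/discovery.sh rather than its exact source:

    # Count only notifications newer than notify_id, then advance the cursor.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))   # e.g. 2 new events: 2 -> 4
    }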
00:33:51.917 [2024-11-19 21:23:25.635911] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:51.917 [2024-11-19 21:23:25.635973] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:51.917 [2024-11-19 21:23:25.638868] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x6150001f3e00 was disconnected and freed. delete nvme_qpair. 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.917 request: 00:33:51.917 { 00:33:51.917 "name": "nvme", 00:33:51.917 "trtype": "tcp", 00:33:51.917 "traddr": "10.0.0.2", 00:33:51.917 "adrfam": "ipv4", 00:33:51.917 "trsvcid": "8009", 00:33:51.917 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:51.917 "wait_for_attach": true, 00:33:51.917 "method": "bdev_nvme_start_discovery", 00:33:51.917 "req_id": 1 00:33:51.917 } 00:33:51.917 Got JSON-RPC error response 00:33:51.917 response: 00:33:51.917 { 00:33:51.917 "code": -17, 00:33:51.917 "message": "File exists" 00:33:51.917 } 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:51.917 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:52.175 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.175 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:52.175 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:52.175 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:52.175 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:52.175 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:52.175 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:52.175 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:52.175 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:52.175 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:52.175 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.175 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.175 request: 00:33:52.175 { 00:33:52.175 "name": "nvme_second", 00:33:52.175 "trtype": "tcp", 00:33:52.175 "traddr": "10.0.0.2", 00:33:52.175 "adrfam": "ipv4", 00:33:52.175 "trsvcid": "8009", 00:33:52.175 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:52.175 "wait_for_attach": true, 00:33:52.175 "method": 
"bdev_nvme_start_discovery", 00:33:52.175 "req_id": 1 00:33:52.175 } 00:33:52.175 Got JSON-RPC error response 00:33:52.175 response: 00:33:52.175 { 00:33:52.175 "code": -17, 00:33:52.175 "message": "File exists" 00:33:52.175 } 00:33:52.175 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:52.175 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:52.175 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:52.175 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:52.176 21:23:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.176 21:23:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:53.110 [2024-11-19 21:23:26.847792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.110 [2024-11-19 21:23:26.847886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4080 with addr=10.0.0.2, port=8010 00:33:53.110 [2024-11-19 21:23:26.847981] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:53.110 [2024-11-19 21:23:26.848012] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:53.110 [2024-11-19 21:23:26.848047] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:54.484 [2024-11-19 21:23:27.850214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.484 [2024-11-19 21:23:27.850282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=8010 00:33:54.484 [2024-11-19 21:23:27.850355] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:54.484 [2024-11-19 21:23:27.850400] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:54.484 [2024-11-19 21:23:27.850437] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:55.418 [2024-11-19 21:23:28.852276] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:55.418 request: 00:33:55.418 { 00:33:55.418 "name": "nvme_second", 00:33:55.418 "trtype": "tcp", 00:33:55.418 "traddr": "10.0.0.2", 00:33:55.418 "adrfam": "ipv4", 00:33:55.418 "trsvcid": "8010", 00:33:55.418 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:55.418 "wait_for_attach": false, 00:33:55.418 "attach_timeout_ms": 3000, 00:33:55.418 "method": "bdev_nvme_start_discovery", 00:33:55.418 "req_id": 1 00:33:55.418 } 00:33:55.418 Got JSON-RPC error response 00:33:55.418 response: 00:33:55.418 { 00:33:55.418 "code": -110, 00:33:55.418 "message": "Connection timed out" 00:33:55.418 } 00:33:55.418 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:55.418 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:55.418 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:55.418 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:55.418 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:55.418 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:55.418 
21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:55.418 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.418 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:55.418 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:55.418 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:55.418 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:55.418 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.418 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:55.418 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:55.418 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3129739 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:55.419 rmmod nvme_tcp 00:33:55.419 rmmod nvme_fabrics 00:33:55.419 rmmod nvme_keyring 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3129581 ']' 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3129581 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3129581 ']' 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3129581 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3129581 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3129581' 00:33:55.419 killing process with pid 3129581 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 
3129581 00:33:55.419 21:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3129581 00:33:56.350 21:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:56.350 21:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:56.350 21:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:56.350 21:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:33:56.351 21:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:33:56.351 21:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:56.351 21:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:33:56.351 21:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:56.351 21:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:56.351 21:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.351 21:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:56.351 21:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:58.884 00:33:58.884 real 0m15.646s 00:33:58.884 user 0m23.092s 00:33:58.884 sys 0m3.099s 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.884 ************************************ 00:33:58.884 END TEST nvmf_host_discovery 00:33:58.884 ************************************ 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.884 ************************************ 00:33:58.884 START TEST nvmf_host_multipath_status 00:33:58.884 ************************************ 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:58.884 * Looking for test storage... 
00:33:58.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:58.884 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:58.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.885 --rc genhtml_branch_coverage=1 00:33:58.885 --rc genhtml_function_coverage=1 00:33:58.885 --rc genhtml_legend=1 00:33:58.885 --rc geninfo_all_blocks=1 00:33:58.885 --rc geninfo_unexecuted_blocks=1 00:33:58.885 00:33:58.885 ' 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:58.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.885 --rc genhtml_branch_coverage=1 00:33:58.885 --rc genhtml_function_coverage=1 00:33:58.885 --rc genhtml_legend=1 00:33:58.885 --rc geninfo_all_blocks=1 00:33:58.885 --rc geninfo_unexecuted_blocks=1 00:33:58.885 00:33:58.885 ' 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:58.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.885 --rc genhtml_branch_coverage=1 00:33:58.885 --rc genhtml_function_coverage=1 00:33:58.885 --rc genhtml_legend=1 00:33:58.885 --rc geninfo_all_blocks=1 00:33:58.885 --rc geninfo_unexecuted_blocks=1 00:33:58.885 00:33:58.885 ' 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:58.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.885 --rc genhtml_branch_coverage=1 00:33:58.885 --rc genhtml_function_coverage=1 00:33:58.885 --rc genhtml_legend=1 00:33:58.885 --rc geninfo_all_blocks=1 00:33:58.885 --rc geninfo_unexecuted_blocks=1 00:33:58.885 00:33:58.885 ' 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
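The lcov version gate running above (scripts/common.sh lt / cmp_versions, comparing 1.15 against 2) is a component-wise numeric comparison; because 1.x sorts below 2, the suite goes on to export the extra --rc lcov_branch_coverage / --rc lcov_function_coverage flags. A condensed sketch of the same comparison, not the exact scripts/common.sh code:

    # Component-wise "less than" on dotted version strings; condensed sketch.
    version_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( 10#$x < 10#$y )) && return 0     # first differing field decides
            (( 10#$x > 10#$y )) && return 1
        done
        return 1                                # equal versions are not "less than"
    }
    # version_lt 1.15 2 succeeds, so the coverage flags above get exported.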
00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:58.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:58.885 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:58.886 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:58.886 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:58.886 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:58.886 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:58.886 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:58.886 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:58.886 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:58.886 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:58.886 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:58.886 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:58.886 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.886 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:58.886 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.886 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:58.886 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:58.886 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:58.886 21:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:00.788 21:23:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:00.788 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
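The NIC discovery pass running above (nvmf/common.sh gather_supported_nvmf_pci_devs) matches the supported Intel E810 device IDs (0x1592, 0x159b) under vendor 0x8086 and then lists the kernel net devices beneath each matching PCI function, which is where the "Found 0000:0a:00.x" and "Found net devices under ...: cvl_0_x" lines come from. A condensed, illustrative view of that loop, not the common.sh source:

    # Walk PCI devices, keep supported E810 functions, list their net devices.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
        [[ $vendor == 0x8086 ]] || continue
        [[ $device == 0x1592 || $device == 0x159b ]] || continue   # supported E810 IDs
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"   # e.g. cvl_0_0
        done
    done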
00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:00.788 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:00.788 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:34:00.788 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:00.788 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:00.789 21:23:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:00.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:00.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:34:00.789 00:34:00.789 --- 10.0.0.2 ping statistics --- 00:34:00.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.789 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:00.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:00.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:34:00.789 00:34:00.789 --- 10.0.0.1 ping statistics --- 00:34:00.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.789 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:00.789 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:01.047 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:01.047 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:01.047 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:01.047 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:01.047 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3132914 00:34:01.047 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:01.047 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3132914 00:34:01.047 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3132914 ']' 00:34:01.047 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.047 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:01.047 21:23:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:01.047 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:01.047 21:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:01.047 [2024-11-19 21:23:34.702214] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:34:01.047 [2024-11-19 21:23:34.702380] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:01.305 [2024-11-19 21:23:34.871976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:01.305 [2024-11-19 21:23:35.011428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:01.305 [2024-11-19 21:23:35.011510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:01.305 [2024-11-19 21:23:35.011536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:01.305 [2024-11-19 21:23:35.011560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:01.305 [2024-11-19 21:23:35.011580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:01.305 [2024-11-19 21:23:35.014554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.305 [2024-11-19 21:23:35.014559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:02.238 21:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:02.238 21:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:02.238 21:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:02.238 21:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:02.238 21:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:02.239 21:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:02.239 21:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3132914 00:34:02.239 21:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:02.239 [2024-11-19 21:23:35.982662] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:02.239 21:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:02.805 Malloc0 00:34:02.805 21:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:34:03.063 21:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:03.321 21:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:03.579 [2024-11-19 21:23:37.147788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:03.579 21:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:03.838 [2024-11-19 21:23:37.420494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:03.838 21:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3133327 00:34:03.838 21:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:03.838 21:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:03.838 21:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3133327 /var/tmp/bdevperf.sock 00:34:03.838 21:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3133327 ']' 00:34:03.838 21:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:03.838 21:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:03.838 21:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:03.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
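Condensed for readability, the target-side bring-up that the entries above record reduces to the RPC sequence below. The commands and arguments are copied from this run; the test script's error handling and xtrace plumbing are omitted, so treat this as a sketch rather than the script itself.

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# transport plus a RAM-backed bdev to export (64 MiB, 512-byte blocks)
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0

# subsystem with ANA reporting enabled (-r) and two listeners on the same target IP
$rpc_py nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -r -m 2
$rpc_py nvmf_subsystem_add_ns $NQN Malloc0
$rpc_py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421

# host side: bdevperf is launched separately with its own RPC socket, as logged above:
#   bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90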
00:34:03.838 21:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:03.838 21:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:04.772 21:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:04.772 21:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:04.772 21:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:05.029 21:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:05.594 Nvme0n1 00:34:05.594 21:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:05.851 Nvme0n1 00:34:06.110 21:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:06.110 21:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:08.012 21:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:08.012 21:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:08.270 21:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:08.528 21:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:09.903 21:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:09.903 21:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:09.903 21:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:09.903 21:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:09.903 21:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:09.903 21:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:09.903 21:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:09.903 21:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:10.161 21:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:10.161 21:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:10.161 21:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.161 21:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:10.421 21:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:10.421 21:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:10.421 21:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.421 21:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:10.681 21:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:10.681 21:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:10.681 21:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.681 21:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:10.939 21:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:10.939 21:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:10.939 21:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.939 21:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:11.197 21:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:11.197 21:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:11.197 21:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
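The check_status pass above is the same probe repeated for the current/connected/accessible fields of each listener. A condensed sketch of the host-side multipath attach and of one port_status-style query, with the socket path, bdev name, and NQN as used in this run:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# two controllers against the same subsystem, one per listener, grouped into multipath bdev Nvme0n1
$rpc_py -s $sock bdev_nvme_set_options -r -1
$rpc_py -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n $NQN -x multipath -l -1 -o 10
$rpc_py -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n $NQN -x multipath -l -1 -o 10

# "port_status 4420 current true" boils down to comparing one jq result with the expected value
state=$($rpc_py -s $sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current')
[[ $state == true ]]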
00:34:11.456 21:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:11.714 21:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:13.089 21:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:13.089 21:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:13.089 21:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:13.089 21:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:13.089 21:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:13.089 21:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:13.089 21:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:13.089 21:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:13.348 21:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:13.348 21:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:13.348 21:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:13.348 21:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:13.607 21:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:13.607 21:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:13.607 21:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:13.607 21:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:13.865 21:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:13.865 21:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:13.865 21:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
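set_ANA_state (multipath_status.sh@59-60, exercised repeatedly above) is effectively two target-side RPCs, one per listener, followed by a short sleep so the host can pick up the ANA change before check_status runs. The non_optimized/optimized transition logged above amounts to:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
$rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n optimized
sleep 1   # give the initiator time to observe the new ANA states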
00:34:13.865 21:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:14.123 21:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:14.123 21:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:14.123 21:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.123 21:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:14.382 21:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:14.382 21:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:14.382 21:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:14.641 21:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:15.208 21:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:16.142 21:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:16.142 21:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:16.142 21:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.142 21:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:16.399 21:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:16.400 21:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:16.400 21:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.400 21:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:16.657 21:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:16.657 21:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:16.657 21:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.657 21:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:16.944 21:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:16.944 21:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:16.944 21:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.944 21:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:17.227 21:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.227 21:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:17.227 21:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.227 21:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:17.484 21:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.484 21:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:17.484 21:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.484 21:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:17.741 21:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.741 21:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:17.741 21:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:17.999 21:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:18.257 21:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:19.193 21:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:19.193 21:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:19.193 21:23:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.193 21:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:19.451 21:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:19.451 21:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:19.451 21:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.451 21:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:19.709 21:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:19.709 21:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:19.709 21:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.709 21:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:20.275 21:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:20.275 21:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:20.275 21:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.275 21:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:20.275 21:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:20.275 21:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:20.275 21:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.275 21:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:20.841 21:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:20.841 21:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:20.841 21:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.841 21:23:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:20.841 21:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:20.841 21:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:20.841 21:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:21.099 21:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:21.666 21:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:22.599 21:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:22.599 21:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:22.599 21:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.599 21:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:22.857 21:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:22.857 21:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:22.857 21:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.857 21:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:23.115 21:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:23.115 21:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:23.115 21:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.115 21:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:23.374 21:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:23.374 21:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:23.374 21:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.374 21:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:23.632 21:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:23.632 21:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:23.632 21:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.632 21:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:23.890 21:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:23.890 21:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:23.890 21:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.890 21:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:24.148 21:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:24.148 21:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:24.148 21:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:24.406 21:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:24.663 21:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:25.595 21:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:25.595 21:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:25.595 21:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.595 21:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:26.160 21:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:26.160 21:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:26.160 21:23:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.160 21:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:26.160 21:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.160 21:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:26.160 21:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.160 21:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:26.726 21:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.726 21:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:26.726 21:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.726 21:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:26.726 21:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.726 21:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:26.726 21:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.726 21:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:27.292 21:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:27.292 21:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:27.292 21:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.292 21:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:27.292 21:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.292 21:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:27.858 21:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:34:27.858 21:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:27.858 21:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:28.424 21:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:29.358 21:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:29.358 21:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:29.358 21:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.358 21:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:29.616 21:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.616 21:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:29.616 21:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.616 21:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:29.874 21:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.874 21:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:29.874 21:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.874 21:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:30.133 21:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.133 21:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:30.133 21:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.133 21:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:30.391 21:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.391 21:24:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:30.391 21:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.391 21:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:30.650 21:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.650 21:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:30.650 21:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.650 21:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:30.908 21:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.908 21:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:30.908 21:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:31.167 21:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:31.425 21:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:32.360 21:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:32.360 21:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:32.360 21:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.360 21:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:32.926 21:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:32.926 21:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:32.926 21:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.926 21:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:32.926 21:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.926 21:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:32.926 21:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.926 21:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:33.492 21:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.492 21:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:33.492 21:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.492 21:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:33.492 21:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.492 21:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:33.492 21:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.492 21:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:34.058 21:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.058 21:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:34.058 21:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.058 21:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:34.326 21:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.326 21:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:34.326 21:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:34.587 21:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:34.844 21:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
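After the switch to the active_active policy (bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active, logged earlier), both optimized listeners are expected to be current at the same time, which is what the check_status true true ... passes above verify field by field. A hedged one-liner, assuming the same bdevperf socket as this run, that simply counts how many I/O paths report current==true (2 in the optimized/optimized, active_active state):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq '[.poll_groups[].io_paths[] | select(.current == true)] | length'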
00:34:35.779 21:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:35.779 21:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:35.779 21:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.779 21:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:36.037 21:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.037 21:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:36.037 21:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.037 21:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:36.295 21:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.295 21:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:36.295 21:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.295 21:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:36.553 21:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.553 21:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:36.553 21:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.553 21:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:36.811 21:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.811 21:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:36.811 21:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.811 21:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:37.070 21:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.070 21:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:37.070 21:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.070 21:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:37.328 21:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.328 21:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:37.328 21:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:37.586 21:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:38.152 21:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:39.085 21:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:39.085 21:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:39.086 21:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.086 21:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:39.344 21:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.344 21:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:39.344 21:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.344 21:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:39.602 21:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:39.602 21:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:39.602 21:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.602 21:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:39.860 21:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:34:39.860 21:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:39.860 21:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.860 21:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:40.118 21:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.118 21:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:40.118 21:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.118 21:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:40.376 21:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.376 21:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:40.376 21:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.376 21:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:40.634 21:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:40.634 21:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3133327 00:34:40.634 21:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3133327 ']' 00:34:40.634 21:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3133327 00:34:40.634 21:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:40.634 21:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:40.634 21:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3133327 00:34:40.634 21:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:34:40.634 21:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:34:40.634 21:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3133327' 00:34:40.634 killing process with pid 3133327 00:34:40.634 21:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3133327 00:34:40.634 21:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3133327 00:34:40.634 { 00:34:40.634 "results": [ 00:34:40.634 { 00:34:40.634 "job": "Nvme0n1", 
00:34:40.634 "core_mask": "0x4", 00:34:40.634 "workload": "verify", 00:34:40.634 "status": "terminated", 00:34:40.634 "verify_range": { 00:34:40.634 "start": 0, 00:34:40.634 "length": 16384 00:34:40.634 }, 00:34:40.634 "queue_depth": 128, 00:34:40.634 "io_size": 4096, 00:34:40.634 "runtime": 34.510074, 00:34:40.634 "iops": 5878.7761509871, 00:34:40.634 "mibps": 22.96396933979336, 00:34:40.634 "io_failed": 0, 00:34:40.634 "io_timeout": 0, 00:34:40.634 "avg_latency_us": 21739.234240896556, 00:34:40.634 "min_latency_us": 1486.6962962962964, 00:34:40.634 "max_latency_us": 4101097.2444444443 00:34:40.634 } 00:34:40.634 ], 00:34:40.634 "core_count": 1 00:34:40.634 } 00:34:41.584 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3133327 00:34:41.584 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:41.584 [2024-11-19 21:23:37.517626] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:34:41.584 [2024-11-19 21:23:37.517784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3133327 ] 00:34:41.584 [2024-11-19 21:23:37.654918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:41.584 [2024-11-19 21:23:37.778606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:41.584 Running I/O for 90 seconds... 00:34:41.584 5937.00 IOPS, 23.19 MiB/s [2024-11-19T20:24:15.379Z] 6171.00 IOPS, 24.11 MiB/s [2024-11-19T20:24:15.379Z] 6194.67 IOPS, 24.20 MiB/s [2024-11-19T20:24:15.379Z] 6247.50 IOPS, 24.40 MiB/s [2024-11-19T20:24:15.379Z] 6237.20 IOPS, 24.36 MiB/s [2024-11-19T20:24:15.379Z] 6247.67 IOPS, 24.40 MiB/s [2024-11-19T20:24:15.379Z] 6241.71 IOPS, 24.38 MiB/s [2024-11-19T20:24:15.379Z] 6224.38 IOPS, 24.31 MiB/s [2024-11-19T20:24:15.379Z] 6205.44 IOPS, 24.24 MiB/s [2024-11-19T20:24:15.379Z] 6225.50 IOPS, 24.32 MiB/s [2024-11-19T20:24:15.379Z] 6220.64 IOPS, 24.30 MiB/s [2024-11-19T20:24:15.379Z] 6218.25 IOPS, 24.29 MiB/s [2024-11-19T20:24:15.379Z] 6225.92 IOPS, 24.32 MiB/s [2024-11-19T20:24:15.379Z] 6224.07 IOPS, 24.31 MiB/s [2024-11-19T20:24:15.379Z] 6225.73 IOPS, 24.32 MiB/s [2024-11-19T20:24:15.379Z] [2024-11-19 21:23:54.855689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.584 [2024-11-19 21:23:54.855761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.855824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.855857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.855896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.855922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 
21:23:54.855960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.855986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.856021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.856083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.856127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.856169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.856207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.856234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.856271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.856295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.856331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.856376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.856439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.856464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.856498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.856522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.856555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.856579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.856611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.856635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.856667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.856691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.856724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.856747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.856796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.856821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.856855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.856879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.856913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.856936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.856970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.856993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.857026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.857083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.857121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.857146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.857181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.857210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.857245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.857270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:41.584 [2024-11-19 21:23:54.857305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.584 [2024-11-19 21:23:54.857329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.857373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.857397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.857450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.857475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.857510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.857535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.857571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.857595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.857631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.857655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.857705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.857730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.857779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.857803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.857854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.857878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.857913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:41.585 [2024-11-19 21:23:54.857937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.857972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.858000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.858035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.858077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.858114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.858138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.858172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.858196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.858230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.858255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.858288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.858313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.858348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.858398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.858431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.858455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.858489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.585 [2024-11-19 21:23:54.858512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.858545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 
nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.585 [2024-11-19 21:23:54.858569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.858602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.585 [2024-11-19 21:23:54.858625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.858657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.585 [2024-11-19 21:23:54.858681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.858714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.585 [2024-11-19 21:23:54.858738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.858793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.585 [2024-11-19 21:23:54.858818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.858853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.585 [2024-11-19 21:23:54.858877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.858911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.858935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.858969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.858993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.859029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.859076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.859713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.859762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.859807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.859834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.859872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.859898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.859935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.859977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.860014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.860039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.860109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.860149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.860185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.860209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.860250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.860275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.860310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.860335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.860379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.860403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:41.585 [2024-11-19 21:23:54.860453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.860478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
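Each command/completion pair in this stretch of the try.txt dump is the NVMe driver (nvme_qpair.c) printing an I/O that completed with status ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. status code type 0x3 (path-related), status code 0x02. That is what the initiator sees on a path whose ANA state has been set to inaccessible, which this test drives repeatedly, and it is consistent with the results block above reporting "io_failed": 0: the affected I/Os are retried on the path that is still accessible, so the verify workload finishes cleanly (34.51 s runtime, 5878.78 IOPS of 4 KiB I/O, i.e. 5878.78 * 4096 / 2^20 ≈ 22.96 MiB/s, matching the reported "mibps"). A quick way to summarize a dump like this is the hypothetical one-liner below; it is not part of the test scripts.

# Hypothetical helper, not from the test: count the completion statuses
# printed in the bdevperf log (try.txt).
grep -o 'ASYMMETRIC ACCESS [A-Z ]*([0-9a-f]*/[0-9a-f]*)' try.txt | sort | uniq -c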
00:34:41.585 [2024-11-19 21:23:54.860512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.585 [2024-11-19 21:23:54.860535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.860568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.860608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.860645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.860669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.861675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.861705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.861757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.861783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.861833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.861860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.861896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.861922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.861958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.861984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.862020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.862061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.862114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.862141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.862177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.862203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.862239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.862265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.862301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.862327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.862381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.862407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.862457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.862482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.862518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.862543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.862578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.862602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.862636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.862661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.862695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.862720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.862769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.862796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.862832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.862862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.862899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.862925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.862960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.862985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.863020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.863045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.863114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.863144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.863180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.863207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.863243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.586 [2024-11-19 21:23:54.863269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.863306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.863331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.863378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.863420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.863464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:41.586 [2024-11-19 21:23:54.863489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.863522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.863547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.863582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.863606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.863640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.863669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.863704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.863729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.863763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.863806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.863842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.863867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.863902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.863928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.863964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.863991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.864026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.586 [2024-11-19 21:23:54.864067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:41.586 [2024-11-19 21:23:54.864132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 
nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.864160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.864196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.864221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.864257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.864282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.865236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.865270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.865332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.587 [2024-11-19 21:23:54.865368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.865405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.865431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.865495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.865522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.865558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.865583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.865618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.865643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.865677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.865703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.865738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.865779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.865829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.865856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.865892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.865917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.865970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.865997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.866035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.866080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.866120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.866146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.866182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.866208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.866259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.866286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.866330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.866368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.866420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.866443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 
dnr:0 00:34:41.587 [2024-11-19 21:23:54.866478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.866502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.866536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.866561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.866596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.866620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.866654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.866678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.866712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.866737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.866771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.866795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.866830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.866866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.866903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.866928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.866962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.866988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.867022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.867061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.867110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.867142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.867180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.867208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.867245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.867271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.867308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.867334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.867394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.867419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.867454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.867478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.867512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.867537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.867571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.867595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.867630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.587 [2024-11-19 21:23:54.867668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:41.587 [2024-11-19 21:23:54.867705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.588 [2024-11-19 21:23:54.867750] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.867794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.588 [2024-11-19 21:23:54.867820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.867856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.588 [2024-11-19 21:23:54.867882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.867918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.588 [2024-11-19 21:23:54.867949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.867987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.588 [2024-11-19 21:23:54.868030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.868099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.588 [2024-11-19 21:23:54.868151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.868187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.868213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.868249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.868274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.868309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.868334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.868377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.868420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.868473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:41.588 [2024-11-19 21:23:54.868499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.868536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.868561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.868596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.868621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.868656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.588 [2024-11-19 21:23:54.868681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.868732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.588 [2024-11-19 21:23:54.868758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.869546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.588 [2024-11-19 21:23:54.869580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.869630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.869658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.869696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.869722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.869759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.869786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.869841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.869868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.869919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.869945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.869979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.870003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.870039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.870095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.870138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.870164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.870200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.870225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.870260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.870287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.870322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.870347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.870407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.870432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.870472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.870498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.870533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.870557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.870590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.870615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.870649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.870673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:41.588 [2024-11-19 21:23:54.870708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.588 [2024-11-19 21:23:54.870734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.870769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.589 [2024-11-19 21:23:54.870793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.870828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.589 [2024-11-19 21:23:54.870852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.870886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.589 [2024-11-19 21:23:54.870910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.870944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.589 [2024-11-19 21:23:54.870968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.871002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.589 [2024-11-19 21:23:54.871026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.871085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.589 [2024-11-19 21:23:54.871112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.871166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.871193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 
00:34:41.589 [2024-11-19 21:23:54.871229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.871261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.871299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.871325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.871369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.871411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.871461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.871488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.871523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.871563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.871600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.871626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.871662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.871690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.871725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.871751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.871787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.871819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.871858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.871883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.871918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.871943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.871978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.872017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.872067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.872122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.872176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.872202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.872239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.872265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.872300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.872325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.872360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.872395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.872430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.872471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.872506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.872531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.872565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.872590] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.872625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.872650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.872685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.872709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.872744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.872769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.872802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.872827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.872861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.872891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.872927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.872951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.872985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.873010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.873044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.873100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.873138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.873164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.873198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.873224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:41.589 [2024-11-19 21:23:54.873259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.589 [2024-11-19 21:23:54.873286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.873322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.873346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.873382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.873407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.873457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.873483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.873518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.873543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.873577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.873603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.873638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.590 [2024-11-19 21:23:54.873662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.873701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.873728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.873763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.873788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.873822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:103 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.873846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.873880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.873906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.873940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.873965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.873999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.874024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.874083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.874112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.874157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.874184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.874219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.874244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.874280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.874307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.874342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.874384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.874420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.874445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.874484] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.874510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.874545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.874570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.875545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.875579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.875621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.875648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.875686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.590 [2024-11-19 21:23:54.875714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.875752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.875778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.875830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.875857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.875894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.875919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.875955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.875987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.876022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.876047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 
p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.876113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.876157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.876194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.876219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.876255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.876287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.876342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.876378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.876431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.876458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.876494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.876519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.876555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.876581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.876632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.876658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.876694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.876718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.876753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.876777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:41.590 [2024-11-19 21:23:54.876812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.590 [2024-11-19 21:23:54.876837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.876871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.876897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.876932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.876956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.876991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.877017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.877061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.877117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.877157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.877184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.877221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.877262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.877301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.877328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.877375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.877416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.877451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.877477] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.877513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.877537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.877572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.877597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.877630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.877655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.877691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.877715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.877749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.877773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.877808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.877834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.877868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.877892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.877931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.877958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.877993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.878034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.878105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.878134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.878170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.878198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.878236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.878262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.878298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.878326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.878389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.878416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.878468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.878493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.878527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.591 [2024-11-19 21:23:54.878553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.878588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.591 [2024-11-19 21:23:54.878612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.878646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.591 [2024-11-19 21:23:54.878687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.878724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.591 [2024-11-19 21:23:54.878751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.878792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:5 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.591 [2024-11-19 21:23:54.878817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.878852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.591 [2024-11-19 21:23:54.878879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.878929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.591 [2024-11-19 21:23:54.878958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.879012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.879038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.879812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.879848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.879897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.591 [2024-11-19 21:23:54.879924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.879961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.591 [2024-11-19 21:23:54.879987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.880025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.591 [2024-11-19 21:23:54.880061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.880109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.591 [2024-11-19 21:23:54.880136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.880173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.591 [2024-11-19 21:23:54.880200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:41.591 [2024-11-19 21:23:54.880237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.591 [2024-11-19 21:23:54.880263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.880300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.592 [2024-11-19 21:23:54.880326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.880377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.592 [2024-11-19 21:23:54.880407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.880460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.592 [2024-11-19 21:23:54.880483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.880518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.592 [2024-11-19 21:23:54.880542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.880576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.592 [2024-11-19 21:23:54.880599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.880633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.592 [2024-11-19 21:23:54.880657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.880690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.592 [2024-11-19 21:23:54.880715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.880748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.592 [2024-11-19 21:23:54.880772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.880804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.592 [2024-11-19 21:23:54.880828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:34:41.592 [2024-11-19 21:23:54.880863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.592 [2024-11-19 21:23:54.880886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.880919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.592 [2024-11-19 21:23:54.880943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.880976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.592 [2024-11-19 21:23:54.881001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.881034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.592 [2024-11-19 21:23:54.881093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.881135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.592 [2024-11-19 21:23:54.881166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.881204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.592 [2024-11-19 21:23:54.881230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.881268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.592 [2024-11-19 21:23:54.881293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.881330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.592 [2024-11-19 21:23:54.881381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.881418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.592 [2024-11-19 21:23:54.881442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.881476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.881501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.881535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.881559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.881593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.881616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.881650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.881674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.881708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.881732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.881783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.881809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.881844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.881883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.881920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.881947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.881988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.882019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.882088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.882118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.882157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.882183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.882220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.882246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.882283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.882323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.882371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.882410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.882444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.882468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.882502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.882526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.882559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.882591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.882641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.882669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.882705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.882728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.882762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.592 [2024-11-19 21:23:54.882796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:41.592 [2024-11-19 21:23:54.882834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.882859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.882893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.882917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.882949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.882973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.883006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.883030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.883091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.883120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.883164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.883191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.883228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.883253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.883289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.883316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.883352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.883403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.883438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.883462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.883495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:17 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.883519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.883552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.883575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.883613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.883637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.883670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.883694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.883727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.883761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.883794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.883817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.883852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.883875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.883909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.593 [2024-11-19 21:23:54.883933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.883965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.883988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.884021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.884044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.884106] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.884133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.884170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.884196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.884232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.884258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.884294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.884320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.884383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.884414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.884464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.884489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.884521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.884547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.884580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.884603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.884637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.884661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.884694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.884718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 
sqhd:001c p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.884752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.884780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.885781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.885830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.885878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.885904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.885942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.593 [2024-11-19 21:23:54.885968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:41.593 [2024-11-19 21:23:54.886007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.593 [2024-11-19 21:23:54.886033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.886095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.886138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.886175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.886207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.886245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.886271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.886308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.886333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.886379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.886405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.886457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.886496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.886541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.886565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.886615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.886640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.886675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.886699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.886734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.886759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.886793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.886818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.886862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.886886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.886935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.886959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.886992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.887016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.887078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 
21:23:54.887107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.887145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.887171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.887208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.887234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.887271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.887297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.887334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.887367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.887417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.887442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.887475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.887499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.887532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.887568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.887605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.887630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.887664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.887688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.887721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94648 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.887745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.887779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.887802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.887839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.887863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.887897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.887920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.887953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.887977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.888011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.888035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.888095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.888123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.888159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.888186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.888223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.888249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.888286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.888312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.888349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.888399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.888437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.888480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.888518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.888544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.888581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.888608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:41.594 [2024-11-19 21:23:54.888645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.594 [2024-11-19 21:23:54.888676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.888714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.595 [2024-11-19 21:23:54.888740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.888788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.888814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.888851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.888876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.888913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.888941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.888979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.889005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 
00:34:41.595 [2024-11-19 21:23:54.889051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.889087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.889126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.889153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.889190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.889216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.889957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.595 [2024-11-19 21:23:54.889990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.890034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.595 [2024-11-19 21:23:54.890082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.890124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.595 [2024-11-19 21:23:54.890152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.890189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.890225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.890264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.890291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.890328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.890369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.890420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.890446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:66 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.890480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.890504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.890537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.890560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.890594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.890618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.890669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.890721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.890758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.890810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.890858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.890884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.890921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.890946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.890983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.891009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.891046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.891080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.891125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.891152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.891197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.891223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.891260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.891286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.891323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.891357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.891394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.891420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.891456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.891482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.891534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.891561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.891612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.891637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.891671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.891695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.891729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.595 [2024-11-19 21:23:54.891763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.891798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:41.595 [2024-11-19 21:23:54.891822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.891856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.595 [2024-11-19 21:23:54.891881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.891919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.595 [2024-11-19 21:23:54.891944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.891993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.595 [2024-11-19 21:23:54.892018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:41.595 [2024-11-19 21:23:54.892077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.892105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.892153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.892179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.892215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.892241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.892278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.892304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.892340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.892388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.892448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.892473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.892523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 
nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.892551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.892604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.892631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.892668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.892734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.892783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.892809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.892846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.892879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.892918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.892944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.892980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.893006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.893042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.893085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.893135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.893161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.893197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.893223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.893258] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.893285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.893321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.893347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.893384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.893410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.893470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.893495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.893529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.893553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.893587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.893612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.893647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.893677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.893713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.893737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.893771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.893813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.893849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.893892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 
p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.893930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.893957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.893994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.894020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.894066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.894101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.894138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.894164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.894201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.894227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.894263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.894289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.894326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.894377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.894413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.596 [2024-11-19 21:23:54.894454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.894506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.894538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.894577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.894603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.894639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.894664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.894700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.894727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.894764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.596 [2024-11-19 21:23:54.894797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:41.596 [2024-11-19 21:23:54.894833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.894861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.894898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.894923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.894960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.894985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.895021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.895058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.895103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.895130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.895166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.895192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.895237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.895264] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.896295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.896328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.896386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.896414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.896460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.896486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.896547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.896589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.896625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.597 [2024-11-19 21:23:54.896666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.896703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.896728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.896762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.896787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.896822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.896846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.896891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.896915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.896974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.896998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.897032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.897093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.897135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.897161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.897197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.897223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.897265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.897291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.897328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.897354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.897414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.897438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.897472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.897495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.897546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.897572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.897607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.897632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.897666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:123 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.897690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.897725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.897749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.897784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.897824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.897860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.897883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.897918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.897941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.897974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.897998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.898032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.898096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:41.597 [2024-11-19 21:23:54.898136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.597 [2024-11-19 21:23:54.898175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.898214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.898239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.898275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.898300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.898335] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.898385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.898420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.898460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.898494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.898517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.898551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.898576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.898609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.898633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.898666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.898689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.898723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.898747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.898780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.898804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.898838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.898867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.898902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.898926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 
p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.898959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.898983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.899015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.899039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.899098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.899124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.899158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.899182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.899216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.899241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.899297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.899340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.899387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.598 [2024-11-19 21:23:54.899430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.899467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.598 [2024-11-19 21:23:54.899492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.899529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.598 [2024-11-19 21:23:54.899555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.899592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.598 [2024-11-19 21:23:54.899631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.899668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.598 [2024-11-19 21:23:54.899708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.899748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.598 [2024-11-19 21:23:54.899771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.900495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.598 [2024-11-19 21:23:54.900544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.900587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.900615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.900652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.900679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.900716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.598 [2024-11-19 21:23:54.900758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.900795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.598 [2024-11-19 21:23:54.900836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.900887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.598 [2024-11-19 21:23:54.900912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.900947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.598 [2024-11-19 21:23:54.900972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.901006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.598 [2024-11-19 21:23:54.901029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.901091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.598 [2024-11-19 21:23:54.901143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.901182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.598 [2024-11-19 21:23:54.901206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.901242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.598 [2024-11-19 21:23:54.901267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.901323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.598 [2024-11-19 21:23:54.901350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.901403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.598 [2024-11-19 21:23:54.901428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.901467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.598 [2024-11-19 21:23:54.901491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:41.598 [2024-11-19 21:23:54.901526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.598 [2024-11-19 21:23:54.901550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.901584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.599 [2024-11-19 21:23:54.901608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.901659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.599 [2024-11-19 21:23:54.901683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.901716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:41.599 [2024-11-19 21:23:54.901740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.901773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.599 [2024-11-19 21:23:54.901797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.901831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.599 [2024-11-19 21:23:54.901854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.901887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.599 [2024-11-19 21:23:54.901911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.901944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.599 [2024-11-19 21:23:54.901968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.902001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.599 [2024-11-19 21:23:54.902024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.902082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.599 [2024-11-19 21:23:54.902116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.902153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.599 [2024-11-19 21:23:54.902178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.902213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.599 [2024-11-19 21:23:54.902237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.902270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.599 [2024-11-19 21:23:54.902294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.902329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94800 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.902384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.902437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.902463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.902498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.902522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.902557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.902583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.902618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.902643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.902692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.902716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.902749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.902773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.902806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.902831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.902865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.902893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.902929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.902952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.902986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.903009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.903042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.903088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.903126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.903163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.903201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.903226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.903262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.903287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.903323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.903372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.903407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.903430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.903464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.903489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.903522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.903545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.903578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.903601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:34:41.599 [2024-11-19 21:23:54.903635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.903659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.903697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.903721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.903754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.903778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.903812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.903836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.903869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.903892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:41.599 [2024-11-19 21:23:54.903926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.599 [2024-11-19 21:23:54.903950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.903983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.904007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.904041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.904089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.904143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.904169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.904204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.904231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.904266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.904292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.904328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.904364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.904415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.904455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.904495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.904520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.904554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.904578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.904611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.904636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.904670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.904693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.904726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.600 [2024-11-19 21:23:54.904750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.904784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.904808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.904842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.904866] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.904899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.904923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.904957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.904981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.905015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.905038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.905096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.905121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.905157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.905182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.905220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.905247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.905282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.905314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.905349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.905391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.905425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.905449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.906449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:41.600 [2024-11-19 21:23:54.906498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.906552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.906581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.906619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.906646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.906697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.906725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.906776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.906802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.906839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.600 [2024-11-19 21:23:54.906864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.906898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.906922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.906957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.906981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.907017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.907079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.907136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.907163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.907214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 
nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.907241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.907277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.907302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.907338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.907388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.907425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.907450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.907484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.600 [2024-11-19 21:23:54.907524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:41.600 [2024-11-19 21:23:54.907560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.907585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.907635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.907660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.907695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.907721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.907756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.907788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.907834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.907860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.907895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.907939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.907975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.908000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.908033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.908080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.908121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.908147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.908182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.908208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.908244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.908269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.908305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.908330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.908390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.908426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.908462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.908486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.908520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.908544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 
dnr:0 00:34:41.601 [2024-11-19 21:23:54.908577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.908600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.908634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.908657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.908691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.908714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.908752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.908793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.908830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.908854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.908889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.908913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.908946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.908970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.909005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.909029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.909083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.909127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.909167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.909201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.909239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.909265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.909301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.909327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.909378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.909406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.909463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.909492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.909545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.909569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.909610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.909637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.909674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.601 [2024-11-19 21:23:54.909699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.909751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.601 [2024-11-19 21:23:54.909777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.909812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.601 [2024-11-19 21:23:54.909837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.909871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.601 [2024-11-19 21:23:54.909896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.909932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.601 [2024-11-19 21:23:54.909956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.910322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.601 [2024-11-19 21:23:54.910356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.910432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.601 [2024-11-19 21:23:54.910464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:41.601 [2024-11-19 21:23:54.910522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.601 [2024-11-19 21:23:54.910549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.910589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.602 [2024-11-19 21:23:54.910631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.910670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.602 [2024-11-19 21:23:54.910695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.910747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.910788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.910828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.910859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.910898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.910923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.910961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:41.602 [2024-11-19 21:23:54.910987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.911026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.911065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.911131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.911158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.911197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.911222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.911259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.911283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.911321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.911347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.911385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.911425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.911463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.911487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.911524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.911548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.911584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.911608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.911644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 
lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.911673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.911712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.911736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.911790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.911816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.911854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.911879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.911917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.911943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.911981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.912005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.912042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.912092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.912135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.912161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.912200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.912226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.912265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.602 [2024-11-19 21:23:54.912290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.912328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.602 [2024-11-19 21:23:54.912355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.912407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.602 [2024-11-19 21:23:54.912433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.912470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.602 [2024-11-19 21:23:54.912494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.912535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.602 [2024-11-19 21:23:54.912560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.912597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.602 [2024-11-19 21:23:54.912621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.912658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.602 [2024-11-19 21:23:54.912682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.912719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.602 [2024-11-19 21:23:54.912742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.912796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.602 [2024-11-19 21:23:54.912823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.912860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.602 [2024-11-19 21:23:54.912884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:41.602 [2024-11-19 21:23:54.912921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.602 [2024-11-19 21:23:54.912946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 
dnr:0 00:34:41.603 [2024-11-19 21:23:54.912984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.913009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.913047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.913095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.913151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.913204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.913245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.913270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.913308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.913334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.913378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.913404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.913456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.913482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.913520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.913544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.913581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.913604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.913641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.913665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.913703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.913727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.913780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.913807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.913846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.913870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.913908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.913934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.913971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.913995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.914033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.914060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.914107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.914132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.914175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.914201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.914240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.914264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.914302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.914327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.914365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.914407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.914445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.914468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.914506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.914531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.914567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.914591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.914628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.914653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.914689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.914713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.914764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.914791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.914829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.603 [2024-11-19 21:23:54.914854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.914891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.914918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.914963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:41.603 [2024-11-19 21:23:54.914994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.915033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.915083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.915139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.915167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.915206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.915230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.915268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.915294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.915332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.915357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.915395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.915436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.915474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.915498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.915536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.603 [2024-11-19 21:23:54.915561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:41.603 [2024-11-19 21:23:54.915793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.604 [2024-11-19 21:23:54.915824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:41.604 5857.94 IOPS, 22.88 MiB/s [2024-11-19T20:24:15.399Z] 5513.35 IOPS, 21.54 MiB/s [2024-11-19T20:24:15.399Z] 
5207.06 IOPS, 20.34 MiB/s [2024-11-19T20:24:15.399Z] 4933.00 IOPS, 19.27 MiB/s [2024-11-19T20:24:15.399Z] 4960.20 IOPS, 19.38 MiB/s [2024-11-19T20:24:15.399Z] 5001.19 IOPS, 19.54 MiB/s [2024-11-19T20:24:15.399Z] 5058.32 IOPS, 19.76 MiB/s [2024-11-19T20:24:15.399Z] 5210.61 IOPS, 20.35 MiB/s [2024-11-19T20:24:15.399Z] 5350.75 IOPS, 20.90 MiB/s [2024-11-19T20:24:15.399Z] 5481.88 IOPS, 21.41 MiB/s [2024-11-19T20:24:15.399Z] 5508.15 IOPS, 21.52 MiB/s [2024-11-19T20:24:15.399Z] 5525.63 IOPS, 21.58 MiB/s [2024-11-19T20:24:15.399Z] 5542.57 IOPS, 21.65 MiB/s [2024-11-19T20:24:15.399Z] 5588.10 IOPS, 21.83 MiB/s [2024-11-19T20:24:15.399Z] 5689.63 IOPS, 22.23 MiB/s [2024-11-19T20:24:15.399Z] 5780.13 IOPS, 22.58 MiB/s [2024-11-19T20:24:15.399Z] [2024-11-19 21:24:11.626807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.604 [2024-11-19 21:24:11.626925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.627057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.604 [2024-11-19 21:24:11.627113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.627157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.604 [2024-11-19 21:24:11.627183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.627224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.604 [2024-11-19 21:24:11.627250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.627286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.604 [2024-11-19 21:24:11.627312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.627350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.604 [2024-11-19 21:24:11.627391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.627431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.604 [2024-11-19 21:24:11.627455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.627491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.604 [2024-11-19 21:24:11.627515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.627549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.604 [2024-11-19 21:24:11.627573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.627608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.604 [2024-11-19 21:24:11.627632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.627667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.604 [2024-11-19 21:24:11.627691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.627746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.604 [2024-11-19 21:24:11.627771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.627807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.604 [2024-11-19 21:24:11.627831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.627866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.604 [2024-11-19 21:24:11.627899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.627936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.604 [2024-11-19 21:24:11.627961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.627997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.604 [2024-11-19 21:24:11.628021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.628056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.604 [2024-11-19 21:24:11.628105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.628147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.604 [2024-11-19 21:24:11.628173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.628210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.604 [2024-11-19 21:24:11.628235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.628271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.604 [2024-11-19 21:24:11.628297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.628333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.604 [2024-11-19 21:24:11.628359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.628410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.604 [2024-11-19 21:24:11.628436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.628471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.604 [2024-11-19 21:24:11.628496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.628530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.604 [2024-11-19 21:24:11.628554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.628589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.604 [2024-11-19 21:24:11.628613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.628648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.604 [2024-11-19 21:24:11.628678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.628714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.604 [2024-11-19 21:24:11.628738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.628772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:41.604 [2024-11-19 21:24:11.628797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.628832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.604 [2024-11-19 21:24:11.628857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.628892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.604 [2024-11-19 21:24:11.628917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.628952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.604 [2024-11-19 21:24:11.628977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.629012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.604 [2024-11-19 21:24:11.629036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.629094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.604 [2024-11-19 21:24:11.629121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.629159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.604 [2024-11-19 21:24:11.629185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.629221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.604 [2024-11-19 21:24:11.629246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:41.604 [2024-11-19 21:24:11.629281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.605 [2024-11-19 21:24:11.629307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.629343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.629368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.629421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.629468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.629507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.629533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.629569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.629594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.629630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.629655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.629692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.629717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.629753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.629793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.629830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.629854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.632613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.605 [2024-11-19 21:24:11.632650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.632711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.605 [2024-11-19 21:24:11.632739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.632792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.605 [2024-11-19 21:24:11.632819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.632856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.605 [2024-11-19 21:24:11.632882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.632919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.605 [2024-11-19 21:24:11.632945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.632981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.633007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.633050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.633099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.633155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.605 [2024-11-19 21:24:11.633181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.633219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.605 [2024-11-19 21:24:11.633245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.633282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.605 [2024-11-19 21:24:11.633307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.633343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.605 [2024-11-19 21:24:11.633368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.633404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.605 [2024-11-19 21:24:11.633446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.633485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.605 [2024-11-19 21:24:11.633512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 
00:34:41.605 [2024-11-19 21:24:11.633549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.633575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.633613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.633639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.633677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.633703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.633740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.633766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.633804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.605 [2024-11-19 21:24:11.633831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.633873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:41.605 [2024-11-19 21:24:11.633900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.633938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.633964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.634002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.634029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.634067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.634101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.635521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.635558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.635620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.635648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.635686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.635712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:41.605 [2024-11-19 21:24:11.635750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:41.605 [2024-11-19 21:24:11.635777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:41.605 5849.06 IOPS, 22.85 MiB/s [2024-11-19T20:24:15.400Z] 5865.00 IOPS, 22.91 MiB/s [2024-11-19T20:24:15.400Z] 5875.68 IOPS, 22.95 MiB/s [2024-11-19T20:24:15.400Z] Received shutdown signal, test time was about 34.510891 seconds 00:34:41.605 00:34:41.605 Latency(us) 00:34:41.605 [2024-11-19T20:24:15.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.605 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:41.605 Verification LBA range: start 0x0 length 0x4000 00:34:41.605 Nvme0n1 : 34.51 5878.78 22.96 0.00 0.00 21739.23 1486.70 4101097.24 00:34:41.605 [2024-11-19T20:24:15.400Z] =================================================================================================================== 00:34:41.605 [2024-11-19T20:24:15.400Z] Total : 5878.78 22.96 0.00 0.00 21739.23 1486.70 4101097.24 00:34:41.605 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:41.864 rmmod nvme_tcp 00:34:41.864 rmmod nvme_fabrics 00:34:41.864 rmmod nvme_keyring 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:41.864 21:24:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3132914 ']' 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3132914 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3132914 ']' 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3132914 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3132914 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3132914' 00:34:41.864 killing process with pid 3132914 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3132914 00:34:41.864 21:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3132914 00:34:43.240 21:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:43.240 21:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:43.240 21:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:43.240 21:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:34:43.240 21:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:34:43.240 21:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:43.240 21:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:34:43.240 21:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:43.240 21:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:43.240 21:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.240 21:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:43.240 21:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.145 21:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:45.145 00:34:45.145 real 0m46.677s 00:34:45.145 user 2m18.993s 00:34:45.145 sys 0m11.180s 00:34:45.145 21:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:45.145 21:24:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:45.145 ************************************ 00:34:45.145 END TEST nvmf_host_multipath_status 00:34:45.145 ************************************ 00:34:45.145 21:24:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:45.145 21:24:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:45.145 21:24:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:45.145 21:24:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.145 ************************************ 00:34:45.145 START TEST nvmf_discovery_remove_ifc 00:34:45.145 ************************************ 00:34:45.145 21:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:45.404 * Looking for test storage... 00:34:45.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:45.404 21:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:45.404 21:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:34:45.404 21:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:34:45.404 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:45.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.405 --rc genhtml_branch_coverage=1 00:34:45.405 --rc genhtml_function_coverage=1 00:34:45.405 --rc genhtml_legend=1 00:34:45.405 --rc geninfo_all_blocks=1 00:34:45.405 --rc geninfo_unexecuted_blocks=1 00:34:45.405 00:34:45.405 ' 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:45.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.405 --rc genhtml_branch_coverage=1 00:34:45.405 --rc genhtml_function_coverage=1 00:34:45.405 --rc genhtml_legend=1 00:34:45.405 --rc geninfo_all_blocks=1 00:34:45.405 --rc geninfo_unexecuted_blocks=1 00:34:45.405 00:34:45.405 ' 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:45.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.405 --rc genhtml_branch_coverage=1 00:34:45.405 --rc genhtml_function_coverage=1 00:34:45.405 --rc genhtml_legend=1 00:34:45.405 --rc geninfo_all_blocks=1 00:34:45.405 --rc geninfo_unexecuted_blocks=1 00:34:45.405 00:34:45.405 ' 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:45.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.405 --rc genhtml_branch_coverage=1 00:34:45.405 --rc genhtml_function_coverage=1 00:34:45.405 --rc genhtml_legend=1 00:34:45.405 --rc geninfo_all_blocks=1 00:34:45.405 --rc geninfo_unexecuted_blocks=1 00:34:45.405 00:34:45.405 ' 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:45.405 
21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:45.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:45.405 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:34:45.406 21:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:34:47.337 21:24:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:47.337 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:47.337 21:24:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:47.337 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:47.337 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:47.337 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:47.337 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:47.338 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:47.338 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:47.338 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:47.338 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:47.338 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:47.338 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:47.338 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:47.338 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:47.338 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:47.338 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:47.338 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:47.338 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:47.338 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:47.597 
21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:47.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:47.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:34:47.597 00:34:47.597 --- 10.0.0.2 ping statistics --- 00:34:47.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.597 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:47.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:47.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:34:47.597 00:34:47.597 --- 10.0.0.1 ping statistics --- 00:34:47.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:47.597 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3140668 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3140668 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3140668 ']' 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:47.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:47.597 21:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:47.856 [2024-11-19 21:24:21.459005] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:34:47.856 [2024-11-19 21:24:21.459172] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:47.856 [2024-11-19 21:24:21.609302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:48.114 [2024-11-19 21:24:21.745639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:48.114 [2024-11-19 21:24:21.745733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:48.114 [2024-11-19 21:24:21.745758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:48.114 [2024-11-19 21:24:21.745782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:48.114 [2024-11-19 21:24:21.745801] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:48.114 [2024-11-19 21:24:21.747435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:48.681 21:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:48.681 21:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:48.681 21:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:48.681 21:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:48.681 21:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:48.681 21:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:48.681 21:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:48.681 21:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.681 21:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:48.681 [2024-11-19 21:24:22.441675] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:48.681 [2024-11-19 21:24:22.449958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:48.681 null0 00:34:48.939 [2024-11-19 21:24:22.481868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:48.939 21:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.939 21:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3140818 00:34:48.939 21:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:34:48.939 21:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3140818 /tmp/host.sock 00:34:48.939 21:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3140818 ']' 00:34:48.939 21:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:48.939 21:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:48.939 21:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:48.939 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:48.939 21:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:48.939 21:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:48.939 [2024-11-19 21:24:22.591600] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:34:48.939 [2024-11-19 21:24:22.591751] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3140818 ] 00:34:48.939 [2024-11-19 21:24:22.726291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:49.197 [2024-11-19 21:24:22.860237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:49.764 21:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:49.764 21:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:49.764 21:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:49.764 21:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:49.764 21:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.764 21:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:49.764 21:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.764 21:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:49.764 21:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.764 21:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:50.330 21:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.330 21:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:50.330 21:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.330 21:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:51.263 [2024-11-19 21:24:24.904192] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:51.263 [2024-11-19 21:24:24.904269] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:51.263 [2024-11-19 21:24:24.904311] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:51.263 [2024-11-19 21:24:25.030735] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:51.522 [2024-11-19 21:24:25.213262] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:51.522 [2024-11-19 21:24:25.214847] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2c80:1 started. 00:34:51.522 [2024-11-19 21:24:25.217232] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:51.522 [2024-11-19 21:24:25.217308] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:51.522 [2024-11-19 21:24:25.217415] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:51.522 [2024-11-19 21:24:25.217459] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:51.522 [2024-11-19 21:24:25.217519] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:51.522 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.522 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:51.522 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:51.522 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:51.522 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:51.522 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.522 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:51.522 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:51.522 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:51.522 [2024-11-19 21:24:25.222934] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2c80 was disconnected and freed. delete nvme_qpair. 
00:34:51.522 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.522 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:51.522 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:51.522 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:51.779 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:51.779 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:51.779 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:51.779 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:51.779 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.779 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:51.779 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:51.779 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:51.780 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.780 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:51.780 21:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:52.713 21:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:52.713 21:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:52.713 21:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:52.713 21:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.713 21:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:52.713 21:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:52.713 21:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:52.713 21:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.713 21:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:52.713 21:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:53.646 21:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:53.646 21:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:53.646 21:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:53.646 21:24:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.646 21:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:53.646 21:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:53.646 21:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:53.646 21:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.904 21:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:53.904 21:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:54.837 21:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:54.837 21:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:54.837 21:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.837 21:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:54.837 21:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:54.837 21:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:54.837 21:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:54.837 21:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.837 21:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:54.837 21:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:55.770 21:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:55.770 21:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:55.770 21:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:55.770 21:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.770 21:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:55.770 21:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:55.770 21:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:55.770 21:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.770 21:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:55.770 21:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:57.143 21:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:57.143 21:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:57.143 21:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.143 21:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:57.143 21:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:57.143 21:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:57.143 21:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:57.143 21:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.143 21:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:57.143 21:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:57.143 [2024-11-19 21:24:30.658766] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:57.143 [2024-11-19 21:24:30.658915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:57.143 [2024-11-19 21:24:30.658951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.143 [2024-11-19 21:24:30.658982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:57.143 [2024-11-19 21:24:30.659006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.143 [2024-11-19 21:24:30.659032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:57.143 [2024-11-19 21:24:30.659055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.143 [2024-11-19 21:24:30.659091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:57.143 [2024-11-19 21:24:30.659139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.143 [2024-11-19 21:24:30.659161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:57.143 [2024-11-19 21:24:30.659180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.143 [2024-11-19 21:24:30.659198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:34:57.143 [2024-11-19 21:24:30.668796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:34:57.143 [2024-11-19 21:24:30.678848] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:57.143 [2024-11-19 21:24:30.678889] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
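The one-second polling above is the test's get_bdev_list/wait_for_bdev pair. A minimal sketch of that pattern, assuming an SPDK checkout where rpc_cmd resolves to scripts/rpc.py, jq is available, and the host application serves RPCs on /tmp/host.sock as in this trace (the real helper also enforces a timeout, omitted here):

# get_bdev_list: print the attached bdev names as one sorted,
# space-separated string, exactly as the trace does with
# bdev_get_bdevs | jq -r '.[].name' | sort | xargs.
get_bdev_list() {
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# wait_for_bdev: poll once per second until the list matches the expected
# value ('' while waiting for nvme0n1 to vanish, nvme1n1 once rediscovery
# re-attaches the subsystem).
wait_for_bdev() {
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}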
00:34:57.143 [2024-11-19 21:24:30.678914] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:57.143 [2024-11-19 21:24:30.678932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:57.143 [2024-11-19 21:24:30.679011] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:58.077 21:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:58.077 21:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:58.077 21:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:58.077 21:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.077 21:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:58.077 21:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:58.077 21:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:58.077 [2024-11-19 21:24:31.743123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:58.077 [2024-11-19 21:24:31.743194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:34:58.077 [2024-11-19 21:24:31.743229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:34:58.077 [2024-11-19 21:24:31.743283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:34:58.077 [2024-11-19 21:24:31.743970] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:34:58.077 [2024-11-19 21:24:31.744036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:58.077 [2024-11-19 21:24:31.744093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:58.077 [2024-11-19 21:24:31.744124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:58.077 [2024-11-19 21:24:31.744161] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:58.077 [2024-11-19 21:24:31.744178] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:58.077 [2024-11-19 21:24:31.744192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:58.077 [2024-11-19 21:24:31.744212] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
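While the disconnect/reconnect loop above runs, the same RPC socket can be queried for the controllers the host application still tracks. This is an aside rather than part of the traced test, and it assumes scripts/rpc.py from the SPDK tree plus the /tmp/host.sock socket used throughout this trace:

# List the NVMe bdev controllers (nvme0 here) known to the host app while
# it retries the connection to 10.0.0.2:4420.
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq .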
00:34:58.077 [2024-11-19 21:24:31.744227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:58.077 21:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.077 21:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:58.077 21:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:59.011 [2024-11-19 21:24:32.746746] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:59.011 [2024-11-19 21:24:32.746798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:59.011 [2024-11-19 21:24:32.746831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:59.011 [2024-11-19 21:24:32.746854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:59.011 [2024-11-19 21:24:32.746876] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:34:59.011 [2024-11-19 21:24:32.746898] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:59.011 [2024-11-19 21:24:32.746915] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:59.011 [2024-11-19 21:24:32.746929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:59.011 [2024-11-19 21:24:32.747003] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:59.011 [2024-11-19 21:24:32.747066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.011 [2024-11-19 21:24:32.747126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.011 [2024-11-19 21:24:32.747156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.011 [2024-11-19 21:24:32.747177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.011 [2024-11-19 21:24:32.747199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.011 [2024-11-19 21:24:32.747220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.011 [2024-11-19 21:24:32.747241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.011 [2024-11-19 21:24:32.747261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.011 [2024-11-19 21:24:32.747289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.011 [2024-11-19 21:24:32.747311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.011 [2024-11-19 21:24:32.747332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:34:59.011 [2024-11-19 21:24:32.747436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:34:59.011 [2024-11-19 21:24:32.748410] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:59.011 [2024-11-19 21:24:32.748461] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:34:59.011 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:59.011 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:59.011 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:59.011 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.011 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.011 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:59.011 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:59.011 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.011 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:59.011 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:59.270 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:59.270 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:59.270 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:59.270 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:59.270 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:59.270 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.270 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.270 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:59.270 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:59.270 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.270 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:59.270 21:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:00.203 21:24:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:00.203 21:24:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:00.203 21:24:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.203 21:24:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:00.203 21:24:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:00.203 21:24:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:00.203 21:24:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:00.203 21:24:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.204 21:24:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:00.204 21:24:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:01.137 [2024-11-19 21:24:34.800966] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:01.137 [2024-11-19 21:24:34.801006] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:01.137 [2024-11-19 21:24:34.801055] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:01.137 21:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:01.137 21:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:01.137 21:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:01.137 21:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.137 21:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:01.137 21:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:01.137 21:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:01.137 [2024-11-19 21:24:34.928618] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:01.137 21:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.395 21:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:01.395 21:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:01.396 [2024-11-19 21:24:35.150213] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:35:01.396 [2024-11-19 21:24:35.151686] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x6150001f3900:1 started. 
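For reference, the interface fault this test injects reduces to the ip commands already visible in the trace (discovery_remove_ifc.sh@75-76 for the removal, @82-83 for the restore). A condensed sketch, assuming root and the cvl_0_0_ns_spdk namespace set up earlier by nvmftestinit:

# Tear the target-side interface out from under the established connection.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
# ...host-side reconnects fail with errno 110 and nvme0n1 is removed...
# Restore the interface; discovery then re-reports
# nqn.2016-06.io.spdk:cnode0 and nvme1n1 appears.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up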
00:35:01.396 [2024-11-19 21:24:35.153974] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:01.396 [2024-11-19 21:24:35.154048] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:01.396 [2024-11-19 21:24:35.154140] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:01.396 [2024-11-19 21:24:35.154180] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:01.396 [2024-11-19 21:24:35.154206] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:01.396 [2024-11-19 21:24:35.160340] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x6150001f3900 was disconnected and freed. delete nvme_qpair. 00:35:02.329 21:24:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:02.329 21:24:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:02.329 21:24:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:02.329 21:24:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.329 21:24:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.329 21:24:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:02.329 21:24:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:02.329 21:24:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.329 21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:02.329 21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:02.329 21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3140818 00:35:02.329 21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3140818 ']' 00:35:02.329 21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3140818 00:35:02.329 21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:02.329 21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:02.330 21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3140818 00:35:02.330 21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:02.330 21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:02.330 21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3140818' 00:35:02.330 killing process with pid 3140818 00:35:02.330 21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3140818 00:35:02.330 21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3140818 00:35:03.264 
21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:03.264 21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:03.264 21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:03.264 21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:03.264 21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:03.264 21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:03.264 21:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:03.264 rmmod nvme_tcp 00:35:03.264 rmmod nvme_fabrics 00:35:03.264 rmmod nvme_keyring 00:35:03.264 21:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:03.264 21:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:03.264 21:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:03.264 21:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3140668 ']' 00:35:03.264 21:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3140668 00:35:03.264 21:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3140668 ']' 00:35:03.264 21:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3140668 00:35:03.264 21:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:03.264 21:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:03.264 21:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3140668 00:35:03.522 21:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:03.522 21:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:03.522 21:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3140668' 00:35:03.522 killing process with pid 3140668 00:35:03.522 21:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3140668 00:35:03.522 21:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3140668 00:35:04.457 21:24:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:04.457 21:24:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:04.457 21:24:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:04.457 21:24:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:04.457 21:24:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:35:04.457 21:24:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:04.457 21:24:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:35:04.457 21:24:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:04.457 21:24:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:04.457 21:24:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.457 21:24:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:04.457 21:24:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.363 21:24:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:06.363 00:35:06.363 real 0m21.162s 00:35:06.363 user 0m31.083s 00:35:06.363 sys 0m3.187s 00:35:06.363 21:24:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:06.363 21:24:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:06.363 ************************************ 00:35:06.363 END TEST nvmf_discovery_remove_ifc 00:35:06.363 ************************************ 00:35:06.363 21:24:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:06.363 21:24:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:06.363 21:24:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:06.363 21:24:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.363 ************************************ 00:35:06.363 START TEST nvmf_identify_kernel_target 00:35:06.363 ************************************ 00:35:06.363 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:06.622 * Looking for test storage... 
00:35:06.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:06.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.622 --rc genhtml_branch_coverage=1 00:35:06.622 --rc genhtml_function_coverage=1 00:35:06.622 --rc genhtml_legend=1 00:35:06.622 --rc geninfo_all_blocks=1 00:35:06.622 --rc geninfo_unexecuted_blocks=1 00:35:06.622 00:35:06.622 ' 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:06.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.622 --rc genhtml_branch_coverage=1 00:35:06.622 --rc genhtml_function_coverage=1 00:35:06.622 --rc genhtml_legend=1 00:35:06.622 --rc geninfo_all_blocks=1 00:35:06.622 --rc geninfo_unexecuted_blocks=1 00:35:06.622 00:35:06.622 ' 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:06.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.622 --rc genhtml_branch_coverage=1 00:35:06.622 --rc genhtml_function_coverage=1 00:35:06.622 --rc genhtml_legend=1 00:35:06.622 --rc geninfo_all_blocks=1 00:35:06.622 --rc geninfo_unexecuted_blocks=1 00:35:06.622 00:35:06.622 ' 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:06.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.622 --rc genhtml_branch_coverage=1 00:35:06.622 --rc genhtml_function_coverage=1 00:35:06.622 --rc genhtml_legend=1 00:35:06.622 --rc geninfo_all_blocks=1 00:35:06.622 --rc geninfo_unexecuted_blocks=1 00:35:06.622 00:35:06.622 ' 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.622 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:35:06.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:06.623 21:24:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:09.156 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:09.156 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:09.156 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:09.156 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:09.157 21:24:42 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:09.157 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:09.157 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:09.157 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:09.157 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:09.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:09.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:35:09.157 00:35:09.157 --- 10.0.0.2 ping statistics --- 00:35:09.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:09.157 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:35:09.157 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:09.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:09.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:35:09.157 00:35:09.158 --- 10.0.0.1 ping statistics --- 00:35:09.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:09.158 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.158 21:24:42 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:09.158 21:24:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:10.095 Waiting for block devices as requested 00:35:10.095 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:10.095 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:10.353 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:10.353 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:10.353 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:10.613 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:10.613 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:10.613 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:10.613 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:10.872 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:10.872 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:10.872 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:10.872 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:10.872 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:11.130 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:11.130 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:11.130 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:11.389 21:24:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:11.389 21:24:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:11.389 21:24:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:11.389 21:24:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:11.389 21:24:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:11.389 21:24:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
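The two [[ ... ]] tests above are the expansion of the harness's is_block_zoned screen, which keeps zoned devices from being exported as the kernel target namespace. A sketch mirroring what the trace evaluates (not necessarily the exact upstream helper):

# Succeeds only when the sysfs attribute exists and reports something other
# than "none"; in the trace it expands to [[ none != none ]], so nvme0n1 is
# conventional and stays eligible.
is_block_zoned() {
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}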
00:35:11.389 21:24:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:11.389 21:24:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:11.389 21:24:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:11.389 No valid GPT data, bailing 00:35:11.389 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:11.389 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:11.389 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:11.389 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:11.389 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:11.389 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:11.389 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:11.389 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:11.389 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:11.389 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:35:11.389 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:11.389 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:35:11.389 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:11.389 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:35:11.389 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:35:11.389 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:35:11.389 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:11.389 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:11.389 00:35:11.389 Discovery Log Number of Records 2, Generation counter 2 00:35:11.389 =====Discovery Log Entry 0====== 00:35:11.389 trtype: tcp 00:35:11.389 adrfam: ipv4 00:35:11.389 subtype: current discovery subsystem 00:35:11.389 treq: not specified, sq flow control disable supported 00:35:11.389 portid: 1 00:35:11.389 trsvcid: 4420 00:35:11.389 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:11.389 traddr: 10.0.0.1 00:35:11.389 eflags: none 00:35:11.389 sectype: none 00:35:11.389 =====Discovery Log Entry 1====== 00:35:11.389 trtype: tcp 00:35:11.389 adrfam: ipv4 00:35:11.389 subtype: nvme subsystem 00:35:11.389 treq: not specified, sq flow control disable 
supported 00:35:11.389 portid: 1 00:35:11.389 trsvcid: 4420 00:35:11.389 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:11.389 traddr: 10.0.0.1 00:35:11.389 eflags: none 00:35:11.389 sectype: none 00:35:11.390 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:11.390 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:11.648 ===================================================== 00:35:11.648 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:11.648 ===================================================== 00:35:11.648 Controller Capabilities/Features 00:35:11.648 ================================ 00:35:11.648 Vendor ID: 0000 00:35:11.648 Subsystem Vendor ID: 0000 00:35:11.648 Serial Number: a45c1fcec46d8fc55f6e 00:35:11.648 Model Number: Linux 00:35:11.648 Firmware Version: 6.8.9-20 00:35:11.648 Recommended Arb Burst: 0 00:35:11.648 IEEE OUI Identifier: 00 00 00 00:35:11.648 Multi-path I/O 00:35:11.648 May have multiple subsystem ports: No 00:35:11.648 May have multiple controllers: No 00:35:11.648 Associated with SR-IOV VF: No 00:35:11.648 Max Data Transfer Size: Unlimited 00:35:11.648 Max Number of Namespaces: 0 00:35:11.648 Max Number of I/O Queues: 1024 00:35:11.648 NVMe Specification Version (VS): 1.3 00:35:11.648 NVMe Specification Version (Identify): 1.3 00:35:11.648 Maximum Queue Entries: 1024 00:35:11.648 Contiguous Queues Required: No 00:35:11.648 Arbitration Mechanisms Supported 00:35:11.648 Weighted Round Robin: Not Supported 00:35:11.648 Vendor Specific: Not Supported 00:35:11.648 Reset Timeout: 7500 ms 00:35:11.648 Doorbell Stride: 4 bytes 00:35:11.648 NVM Subsystem Reset: Not Supported 00:35:11.648 Command Sets Supported 00:35:11.648 NVM Command Set: Supported 00:35:11.648 Boot Partition: Not Supported 00:35:11.648 Memory Page Size Minimum: 4096 bytes 00:35:11.648 Memory Page Size Maximum: 4096 bytes 00:35:11.648 Persistent Memory Region: Not Supported 00:35:11.648 Optional Asynchronous Events Supported 00:35:11.648 Namespace Attribute Notices: Not Supported 00:35:11.648 Firmware Activation Notices: Not Supported 00:35:11.648 ANA Change Notices: Not Supported 00:35:11.648 PLE Aggregate Log Change Notices: Not Supported 00:35:11.648 LBA Status Info Alert Notices: Not Supported 00:35:11.648 EGE Aggregate Log Change Notices: Not Supported 00:35:11.649 Normal NVM Subsystem Shutdown event: Not Supported 00:35:11.649 Zone Descriptor Change Notices: Not Supported 00:35:11.649 Discovery Log Change Notices: Supported 00:35:11.649 Controller Attributes 00:35:11.649 128-bit Host Identifier: Not Supported 00:35:11.649 Non-Operational Permissive Mode: Not Supported 00:35:11.649 NVM Sets: Not Supported 00:35:11.649 Read Recovery Levels: Not Supported 00:35:11.649 Endurance Groups: Not Supported 00:35:11.649 Predictable Latency Mode: Not Supported 00:35:11.649 Traffic Based Keep ALive: Not Supported 00:35:11.649 Namespace Granularity: Not Supported 00:35:11.649 SQ Associations: Not Supported 00:35:11.649 UUID List: Not Supported 00:35:11.649 Multi-Domain Subsystem: Not Supported 00:35:11.649 Fixed Capacity Management: Not Supported 00:35:11.649 Variable Capacity Management: Not Supported 00:35:11.649 Delete Endurance Group: Not Supported 00:35:11.649 Delete NVM Set: Not Supported 00:35:11.649 Extended LBA Formats Supported: Not Supported 00:35:11.649 Flexible Data Placement 
Supported: Not Supported 00:35:11.649 00:35:11.649 Controller Memory Buffer Support 00:35:11.649 ================================ 00:35:11.649 Supported: No 00:35:11.649 00:35:11.649 Persistent Memory Region Support 00:35:11.649 ================================ 00:35:11.649 Supported: No 00:35:11.649 00:35:11.649 Admin Command Set Attributes 00:35:11.649 ============================ 00:35:11.649 Security Send/Receive: Not Supported 00:35:11.649 Format NVM: Not Supported 00:35:11.649 Firmware Activate/Download: Not Supported 00:35:11.649 Namespace Management: Not Supported 00:35:11.649 Device Self-Test: Not Supported 00:35:11.649 Directives: Not Supported 00:35:11.649 NVMe-MI: Not Supported 00:35:11.649 Virtualization Management: Not Supported 00:35:11.649 Doorbell Buffer Config: Not Supported 00:35:11.649 Get LBA Status Capability: Not Supported 00:35:11.649 Command & Feature Lockdown Capability: Not Supported 00:35:11.649 Abort Command Limit: 1 00:35:11.649 Async Event Request Limit: 1 00:35:11.649 Number of Firmware Slots: N/A 00:35:11.649 Firmware Slot 1 Read-Only: N/A 00:35:11.649 Firmware Activation Without Reset: N/A 00:35:11.649 Multiple Update Detection Support: N/A 00:35:11.649 Firmware Update Granularity: No Information Provided 00:35:11.649 Per-Namespace SMART Log: No 00:35:11.649 Asymmetric Namespace Access Log Page: Not Supported 00:35:11.649 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:11.649 Command Effects Log Page: Not Supported 00:35:11.649 Get Log Page Extended Data: Supported 00:35:11.649 Telemetry Log Pages: Not Supported 00:35:11.649 Persistent Event Log Pages: Not Supported 00:35:11.649 Supported Log Pages Log Page: May Support 00:35:11.649 Commands Supported & Effects Log Page: Not Supported 00:35:11.649 Feature Identifiers & Effects Log Page:May Support 00:35:11.649 NVMe-MI Commands & Effects Log Page: May Support 00:35:11.649 Data Area 4 for Telemetry Log: Not Supported 00:35:11.649 Error Log Page Entries Supported: 1 00:35:11.649 Keep Alive: Not Supported 00:35:11.649 00:35:11.649 NVM Command Set Attributes 00:35:11.649 ========================== 00:35:11.649 Submission Queue Entry Size 00:35:11.649 Max: 1 00:35:11.649 Min: 1 00:35:11.649 Completion Queue Entry Size 00:35:11.649 Max: 1 00:35:11.649 Min: 1 00:35:11.649 Number of Namespaces: 0 00:35:11.649 Compare Command: Not Supported 00:35:11.649 Write Uncorrectable Command: Not Supported 00:35:11.649 Dataset Management Command: Not Supported 00:35:11.649 Write Zeroes Command: Not Supported 00:35:11.649 Set Features Save Field: Not Supported 00:35:11.649 Reservations: Not Supported 00:35:11.649 Timestamp: Not Supported 00:35:11.649 Copy: Not Supported 00:35:11.649 Volatile Write Cache: Not Present 00:35:11.649 Atomic Write Unit (Normal): 1 00:35:11.649 Atomic Write Unit (PFail): 1 00:35:11.649 Atomic Compare & Write Unit: 1 00:35:11.649 Fused Compare & Write: Not Supported 00:35:11.649 Scatter-Gather List 00:35:11.649 SGL Command Set: Supported 00:35:11.649 SGL Keyed: Not Supported 00:35:11.649 SGL Bit Bucket Descriptor: Not Supported 00:35:11.649 SGL Metadata Pointer: Not Supported 00:35:11.649 Oversized SGL: Not Supported 00:35:11.649 SGL Metadata Address: Not Supported 00:35:11.649 SGL Offset: Supported 00:35:11.649 Transport SGL Data Block: Not Supported 00:35:11.649 Replay Protected Memory Block: Not Supported 00:35:11.649 00:35:11.649 Firmware Slot Information 00:35:11.649 ========================= 00:35:11.649 Active slot: 0 00:35:11.649 00:35:11.649 00:35:11.649 Error Log 00:35:11.649 
========= 00:35:11.649 00:35:11.649 Active Namespaces 00:35:11.649 ================= 00:35:11.649 Discovery Log Page 00:35:11.649 ================== 00:35:11.649 Generation Counter: 2 00:35:11.649 Number of Records: 2 00:35:11.649 Record Format: 0 00:35:11.649 00:35:11.649 Discovery Log Entry 0 00:35:11.649 ---------------------- 00:35:11.649 Transport Type: 3 (TCP) 00:35:11.649 Address Family: 1 (IPv4) 00:35:11.649 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:11.649 Entry Flags: 00:35:11.649 Duplicate Returned Information: 0 00:35:11.649 Explicit Persistent Connection Support for Discovery: 0 00:35:11.649 Transport Requirements: 00:35:11.649 Secure Channel: Not Specified 00:35:11.649 Port ID: 1 (0x0001) 00:35:11.649 Controller ID: 65535 (0xffff) 00:35:11.649 Admin Max SQ Size: 32 00:35:11.649 Transport Service Identifier: 4420 00:35:11.649 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:11.649 Transport Address: 10.0.0.1 00:35:11.649 Discovery Log Entry 1 00:35:11.649 ---------------------- 00:35:11.649 Transport Type: 3 (TCP) 00:35:11.649 Address Family: 1 (IPv4) 00:35:11.649 Subsystem Type: 2 (NVM Subsystem) 00:35:11.649 Entry Flags: 00:35:11.649 Duplicate Returned Information: 0 00:35:11.649 Explicit Persistent Connection Support for Discovery: 0 00:35:11.649 Transport Requirements: 00:35:11.649 Secure Channel: Not Specified 00:35:11.649 Port ID: 1 (0x0001) 00:35:11.649 Controller ID: 65535 (0xffff) 00:35:11.649 Admin Max SQ Size: 32 00:35:11.649 Transport Service Identifier: 4420 00:35:11.649 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:11.649 Transport Address: 10.0.0.1 00:35:11.649 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:11.909 get_feature(0x01) failed 00:35:11.909 get_feature(0x02) failed 00:35:11.909 get_feature(0x04) failed 00:35:11.909 ===================================================== 00:35:11.909 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:11.909 ===================================================== 00:35:11.909 Controller Capabilities/Features 00:35:11.909 ================================ 00:35:11.909 Vendor ID: 0000 00:35:11.909 Subsystem Vendor ID: 0000 00:35:11.909 Serial Number: e177f8fb276abbdb56a9 00:35:11.909 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:11.909 Firmware Version: 6.8.9-20 00:35:11.909 Recommended Arb Burst: 6 00:35:11.909 IEEE OUI Identifier: 00 00 00 00:35:11.909 Multi-path I/O 00:35:11.909 May have multiple subsystem ports: Yes 00:35:11.909 May have multiple controllers: Yes 00:35:11.909 Associated with SR-IOV VF: No 00:35:11.909 Max Data Transfer Size: Unlimited 00:35:11.909 Max Number of Namespaces: 1024 00:35:11.909 Max Number of I/O Queues: 128 00:35:11.909 NVMe Specification Version (VS): 1.3 00:35:11.909 NVMe Specification Version (Identify): 1.3 00:35:11.909 Maximum Queue Entries: 1024 00:35:11.909 Contiguous Queues Required: No 00:35:11.909 Arbitration Mechanisms Supported 00:35:11.909 Weighted Round Robin: Not Supported 00:35:11.909 Vendor Specific: Not Supported 00:35:11.909 Reset Timeout: 7500 ms 00:35:11.909 Doorbell Stride: 4 bytes 00:35:11.909 NVM Subsystem Reset: Not Supported 00:35:11.909 Command Sets Supported 00:35:11.909 NVM Command Set: Supported 00:35:11.909 Boot Partition: Not Supported 00:35:11.909 
Memory Page Size Minimum: 4096 bytes 00:35:11.909 Memory Page Size Maximum: 4096 bytes 00:35:11.909 Persistent Memory Region: Not Supported 00:35:11.909 Optional Asynchronous Events Supported 00:35:11.909 Namespace Attribute Notices: Supported 00:35:11.909 Firmware Activation Notices: Not Supported 00:35:11.909 ANA Change Notices: Supported 00:35:11.909 PLE Aggregate Log Change Notices: Not Supported 00:35:11.909 LBA Status Info Alert Notices: Not Supported 00:35:11.909 EGE Aggregate Log Change Notices: Not Supported 00:35:11.909 Normal NVM Subsystem Shutdown event: Not Supported 00:35:11.909 Zone Descriptor Change Notices: Not Supported 00:35:11.909 Discovery Log Change Notices: Not Supported 00:35:11.909 Controller Attributes 00:35:11.909 128-bit Host Identifier: Supported 00:35:11.909 Non-Operational Permissive Mode: Not Supported 00:35:11.909 NVM Sets: Not Supported 00:35:11.909 Read Recovery Levels: Not Supported 00:35:11.909 Endurance Groups: Not Supported 00:35:11.909 Predictable Latency Mode: Not Supported 00:35:11.909 Traffic Based Keep ALive: Supported 00:35:11.909 Namespace Granularity: Not Supported 00:35:11.909 SQ Associations: Not Supported 00:35:11.909 UUID List: Not Supported 00:35:11.909 Multi-Domain Subsystem: Not Supported 00:35:11.909 Fixed Capacity Management: Not Supported 00:35:11.909 Variable Capacity Management: Not Supported 00:35:11.909 Delete Endurance Group: Not Supported 00:35:11.909 Delete NVM Set: Not Supported 00:35:11.909 Extended LBA Formats Supported: Not Supported 00:35:11.909 Flexible Data Placement Supported: Not Supported 00:35:11.909 00:35:11.909 Controller Memory Buffer Support 00:35:11.909 ================================ 00:35:11.909 Supported: No 00:35:11.909 00:35:11.910 Persistent Memory Region Support 00:35:11.910 ================================ 00:35:11.910 Supported: No 00:35:11.910 00:35:11.910 Admin Command Set Attributes 00:35:11.910 ============================ 00:35:11.910 Security Send/Receive: Not Supported 00:35:11.910 Format NVM: Not Supported 00:35:11.910 Firmware Activate/Download: Not Supported 00:35:11.910 Namespace Management: Not Supported 00:35:11.910 Device Self-Test: Not Supported 00:35:11.910 Directives: Not Supported 00:35:11.910 NVMe-MI: Not Supported 00:35:11.910 Virtualization Management: Not Supported 00:35:11.910 Doorbell Buffer Config: Not Supported 00:35:11.910 Get LBA Status Capability: Not Supported 00:35:11.910 Command & Feature Lockdown Capability: Not Supported 00:35:11.910 Abort Command Limit: 4 00:35:11.910 Async Event Request Limit: 4 00:35:11.910 Number of Firmware Slots: N/A 00:35:11.910 Firmware Slot 1 Read-Only: N/A 00:35:11.910 Firmware Activation Without Reset: N/A 00:35:11.910 Multiple Update Detection Support: N/A 00:35:11.910 Firmware Update Granularity: No Information Provided 00:35:11.910 Per-Namespace SMART Log: Yes 00:35:11.910 Asymmetric Namespace Access Log Page: Supported 00:35:11.910 ANA Transition Time : 10 sec 00:35:11.910 00:35:11.910 Asymmetric Namespace Access Capabilities 00:35:11.910 ANA Optimized State : Supported 00:35:11.910 ANA Non-Optimized State : Supported 00:35:11.910 ANA Inaccessible State : Supported 00:35:11.910 ANA Persistent Loss State : Supported 00:35:11.910 ANA Change State : Supported 00:35:11.910 ANAGRPID is not changed : No 00:35:11.910 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:11.910 00:35:11.910 ANA Group Identifier Maximum : 128 00:35:11.910 Number of ANA Group Identifiers : 128 00:35:11.910 Max Number of Allowed Namespaces : 1024 00:35:11.910 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:11.910 Command Effects Log Page: Supported 00:35:11.910 Get Log Page Extended Data: Supported 00:35:11.910 Telemetry Log Pages: Not Supported 00:35:11.910 Persistent Event Log Pages: Not Supported 00:35:11.910 Supported Log Pages Log Page: May Support 00:35:11.910 Commands Supported & Effects Log Page: Not Supported 00:35:11.910 Feature Identifiers & Effects Log Page:May Support 00:35:11.910 NVMe-MI Commands & Effects Log Page: May Support 00:35:11.910 Data Area 4 for Telemetry Log: Not Supported 00:35:11.910 Error Log Page Entries Supported: 128 00:35:11.910 Keep Alive: Supported 00:35:11.910 Keep Alive Granularity: 1000 ms 00:35:11.910 00:35:11.910 NVM Command Set Attributes 00:35:11.910 ========================== 00:35:11.910 Submission Queue Entry Size 00:35:11.910 Max: 64 00:35:11.910 Min: 64 00:35:11.910 Completion Queue Entry Size 00:35:11.910 Max: 16 00:35:11.910 Min: 16 00:35:11.910 Number of Namespaces: 1024 00:35:11.910 Compare Command: Not Supported 00:35:11.910 Write Uncorrectable Command: Not Supported 00:35:11.910 Dataset Management Command: Supported 00:35:11.910 Write Zeroes Command: Supported 00:35:11.910 Set Features Save Field: Not Supported 00:35:11.910 Reservations: Not Supported 00:35:11.910 Timestamp: Not Supported 00:35:11.910 Copy: Not Supported 00:35:11.910 Volatile Write Cache: Present 00:35:11.910 Atomic Write Unit (Normal): 1 00:35:11.910 Atomic Write Unit (PFail): 1 00:35:11.910 Atomic Compare & Write Unit: 1 00:35:11.910 Fused Compare & Write: Not Supported 00:35:11.910 Scatter-Gather List 00:35:11.910 SGL Command Set: Supported 00:35:11.910 SGL Keyed: Not Supported 00:35:11.910 SGL Bit Bucket Descriptor: Not Supported 00:35:11.910 SGL Metadata Pointer: Not Supported 00:35:11.910 Oversized SGL: Not Supported 00:35:11.910 SGL Metadata Address: Not Supported 00:35:11.910 SGL Offset: Supported 00:35:11.910 Transport SGL Data Block: Not Supported 00:35:11.910 Replay Protected Memory Block: Not Supported 00:35:11.910 00:35:11.910 Firmware Slot Information 00:35:11.910 ========================= 00:35:11.910 Active slot: 0 00:35:11.910 00:35:11.910 Asymmetric Namespace Access 00:35:11.910 =========================== 00:35:11.910 Change Count : 0 00:35:11.910 Number of ANA Group Descriptors : 1 00:35:11.910 ANA Group Descriptor : 0 00:35:11.910 ANA Group ID : 1 00:35:11.910 Number of NSID Values : 1 00:35:11.910 Change Count : 0 00:35:11.910 ANA State : 1 00:35:11.910 Namespace Identifier : 1 00:35:11.910 00:35:11.910 Commands Supported and Effects 00:35:11.910 ============================== 00:35:11.910 Admin Commands 00:35:11.910 -------------- 00:35:11.910 Get Log Page (02h): Supported 00:35:11.910 Identify (06h): Supported 00:35:11.910 Abort (08h): Supported 00:35:11.910 Set Features (09h): Supported 00:35:11.910 Get Features (0Ah): Supported 00:35:11.910 Asynchronous Event Request (0Ch): Supported 00:35:11.910 Keep Alive (18h): Supported 00:35:11.910 I/O Commands 00:35:11.910 ------------ 00:35:11.910 Flush (00h): Supported 00:35:11.910 Write (01h): Supported LBA-Change 00:35:11.910 Read (02h): Supported 00:35:11.910 Write Zeroes (08h): Supported LBA-Change 00:35:11.910 Dataset Management (09h): Supported 00:35:11.910 00:35:11.910 Error Log 00:35:11.910 ========= 00:35:11.910 Entry: 0 00:35:11.910 Error Count: 0x3 00:35:11.910 Submission Queue Id: 0x0 00:35:11.910 Command Id: 0x5 00:35:11.910 Phase Bit: 0 00:35:11.910 Status Code: 0x2 00:35:11.910 Status Code Type: 0x0 00:35:11.910 Do Not Retry: 1 00:35:11.910 
Error Location: 0x28 00:35:11.910 LBA: 0x0 00:35:11.910 Namespace: 0x0 00:35:11.910 Vendor Log Page: 0x0 00:35:11.910 ----------- 00:35:11.910 Entry: 1 00:35:11.910 Error Count: 0x2 00:35:11.910 Submission Queue Id: 0x0 00:35:11.910 Command Id: 0x5 00:35:11.910 Phase Bit: 0 00:35:11.910 Status Code: 0x2 00:35:11.910 Status Code Type: 0x0 00:35:11.910 Do Not Retry: 1 00:35:11.910 Error Location: 0x28 00:35:11.910 LBA: 0x0 00:35:11.910 Namespace: 0x0 00:35:11.910 Vendor Log Page: 0x0 00:35:11.910 ----------- 00:35:11.910 Entry: 2 00:35:11.910 Error Count: 0x1 00:35:11.910 Submission Queue Id: 0x0 00:35:11.910 Command Id: 0x4 00:35:11.910 Phase Bit: 0 00:35:11.910 Status Code: 0x2 00:35:11.910 Status Code Type: 0x0 00:35:11.910 Do Not Retry: 1 00:35:11.910 Error Location: 0x28 00:35:11.910 LBA: 0x0 00:35:11.910 Namespace: 0x0 00:35:11.910 Vendor Log Page: 0x0 00:35:11.910 00:35:11.910 Number of Queues 00:35:11.910 ================ 00:35:11.910 Number of I/O Submission Queues: 128 00:35:11.910 Number of I/O Completion Queues: 128 00:35:11.910 00:35:11.910 ZNS Specific Controller Data 00:35:11.910 ============================ 00:35:11.910 Zone Append Size Limit: 0 00:35:11.910 00:35:11.910 00:35:11.910 Active Namespaces 00:35:11.910 ================= 00:35:11.910 get_feature(0x05) failed 00:35:11.910 Namespace ID:1 00:35:11.910 Command Set Identifier: NVM (00h) 00:35:11.910 Deallocate: Supported 00:35:11.910 Deallocated/Unwritten Error: Not Supported 00:35:11.910 Deallocated Read Value: Unknown 00:35:11.910 Deallocate in Write Zeroes: Not Supported 00:35:11.910 Deallocated Guard Field: 0xFFFF 00:35:11.910 Flush: Supported 00:35:11.910 Reservation: Not Supported 00:35:11.910 Namespace Sharing Capabilities: Multiple Controllers 00:35:11.910 Size (in LBAs): 1953525168 (931GiB) 00:35:11.910 Capacity (in LBAs): 1953525168 (931GiB) 00:35:11.910 Utilization (in LBAs): 1953525168 (931GiB) 00:35:11.910 UUID: e7296b45-094a-403f-b3f1-4cc9cf38c8bd 00:35:11.910 Thin Provisioning: Not Supported 00:35:11.910 Per-NS Atomic Units: Yes 00:35:11.910 Atomic Boundary Size (Normal): 0 00:35:11.910 Atomic Boundary Size (PFail): 0 00:35:11.910 Atomic Boundary Offset: 0 00:35:11.910 NGUID/EUI64 Never Reused: No 00:35:11.910 ANA group ID: 1 00:35:11.910 Namespace Write Protected: No 00:35:11.910 Number of LBA Formats: 1 00:35:11.910 Current LBA Format: LBA Format #00 00:35:11.910 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:11.910 00:35:11.910 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:11.910 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:11.910 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:11.910 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:11.910 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:11.910 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:11.910 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:11.911 rmmod nvme_tcp 00:35:11.911 rmmod nvme_fabrics 00:35:11.911 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:11.911 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:35:11.911 21:24:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:11.911 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:11.911 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:11.911 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:11.911 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:11.911 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:35:11.911 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:35:11.911 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:11.911 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:11.911 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:11.911 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:11.911 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:11.911 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:11.911 21:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.820 21:24:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:13.820 21:24:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:13.820 21:24:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:13.820 21:24:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:35:14.079 21:24:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:14.079 21:24:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:14.079 21:24:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:14.079 21:24:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:14.079 21:24:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:14.079 21:24:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:14.079 21:24:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:15.456 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:15.456 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:15.456 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:15.456 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:15.456 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:15.456 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:35:15.456 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:15.456 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:15.456 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:15.456 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:15.456 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:15.456 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:15.456 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:15.456 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:15.456 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:15.456 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:16.393 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:16.393 00:35:16.393 real 0m9.955s 00:35:16.393 user 0m2.320s 00:35:16.393 sys 0m3.634s 00:35:16.393 21:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:16.393 21:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:16.393 ************************************ 00:35:16.393 END TEST nvmf_identify_kernel_target 00:35:16.393 ************************************ 00:35:16.393 21:24:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:16.393 21:24:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:16.393 21:24:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:16.393 21:24:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.393 ************************************ 00:35:16.393 START TEST nvmf_auth_host 00:35:16.393 ************************************ 00:35:16.393 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:16.652 * Looking for test storage... 
00:35:16.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:16.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.652 --rc genhtml_branch_coverage=1 00:35:16.652 --rc genhtml_function_coverage=1 00:35:16.652 --rc genhtml_legend=1 00:35:16.652 --rc geninfo_all_blocks=1 00:35:16.652 --rc geninfo_unexecuted_blocks=1 00:35:16.652 00:35:16.652 ' 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:16.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.652 --rc genhtml_branch_coverage=1 00:35:16.652 --rc genhtml_function_coverage=1 00:35:16.652 --rc genhtml_legend=1 00:35:16.652 --rc geninfo_all_blocks=1 00:35:16.652 --rc geninfo_unexecuted_blocks=1 00:35:16.652 00:35:16.652 ' 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:16.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.652 --rc genhtml_branch_coverage=1 00:35:16.652 --rc genhtml_function_coverage=1 00:35:16.652 --rc genhtml_legend=1 00:35:16.652 --rc geninfo_all_blocks=1 00:35:16.652 --rc geninfo_unexecuted_blocks=1 00:35:16.652 00:35:16.652 ' 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:16.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.652 --rc genhtml_branch_coverage=1 00:35:16.652 --rc genhtml_function_coverage=1 00:35:16.652 --rc genhtml_legend=1 00:35:16.652 --rc geninfo_all_blocks=1 00:35:16.652 --rc geninfo_unexecuted_blocks=1 00:35:16.652 00:35:16.652 ' 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:16.652 21:24:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:16.652 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:16.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:35:16.653 21:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:35:18.557 21:24:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:18.557 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:18.557 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:18.558 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:18.558 
21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:18.558 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:18.558 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:18.558 21:24:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:18.558 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:18.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:18.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:35:18.817 00:35:18.817 --- 10.0.0.2 ping statistics --- 00:35:18.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:18.817 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:18.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:18.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:35:18.817 00:35:18.817 --- 10.0.0.1 ping statistics --- 00:35:18.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:18.817 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3148291 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3148291 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3148291 ']' 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
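At the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above, nvmfappstart has launched the SPDK target inside the target network namespace and waitforlisten is polling its RPC socket. A simplified sketch of that launch-and-wait step is shown below; the flags and namespace name are taken from the trace, and the wait loop is an approximation (the real waitforlisten helper goes further and checks the RPC interface itself rather than only testing that the socket file exists).

  # start nvmf_tgt from the SPDK build tree inside the target namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!

  # wait for the default RPC socket, bailing out if the target dies first
  while [ ! -S /var/tmp/spdk.sock ]; do
      kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.1
  done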
00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:18.817 21:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bf9f9ff503c9ad555cfed0d9ad102a54 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.nzP 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bf9f9ff503c9ad555cfed0d9ad102a54 0 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bf9f9ff503c9ad555cfed0d9ad102a54 0 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bf9f9ff503c9ad555cfed0d9ad102a54 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:19.752 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.nzP 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.nzP 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.nzP 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:20.011 21:24:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c0594d68460008f5088764f00cb59177b555b579a12b1439965bcb6ec5459b7b 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Ia6 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c0594d68460008f5088764f00cb59177b555b579a12b1439965bcb6ec5459b7b 3 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c0594d68460008f5088764f00cb59177b555b579a12b1439965bcb6ec5459b7b 3 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c0594d68460008f5088764f00cb59177b555b579a12b1439965bcb6ec5459b7b 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Ia6 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Ia6 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Ia6 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b92810bd3be285eee2edb879294222d18dc48a324f802cd4 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Q1a 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b92810bd3be285eee2edb879294222d18dc48a324f802cd4 0 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b92810bd3be285eee2edb879294222d18dc48a324f802cd4 0 
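gen_dhchap_key above draws len/2 random bytes with xxd and hands the resulting hex string plus a digest id (null=0, sha256=1, sha384=2, sha512=3) to a small inline python helper that prints the finished secret. Judging from the keys echoed later in the trace, the output follows the usual NVMe DH-HMAC-CHAP secret representation: base64 of the secret bytes followed by their CRC-32, wrapped as DHHC-1:<digest>:<base64>:. The helper below is a self-contained approximation of that step; the function name and the little-endian CRC-32 placement are assumptions made here, not lifted from nvmf/common.sh.

# mk_dhchap_key <digest-id> <secret-length-in-hex-chars>  -- hypothetical stand-in for gen_dhchap_key
mk_dhchap_key() {
    local digest=$1 len=$2 key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # e.g. 16 random bytes -> 32 hex chars
    python3 - "$key" "$digest" <<'PY'
import sys, base64, struct, zlib
secret = sys.argv[1].encode()                             # the ASCII hex string itself is the secret
crc = struct.pack('<I', zlib.crc32(secret) & 0xffffffff)  # 4-byte CRC-32 appended before encoding (assumed little-endian)
print('DHHC-1:%02d:%s:' % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
PY
}

mk_dhchap_key 0 32    # prints something like DHHC-1:00:....:
mk_dhchap_key 3 64    # sha512-tagged key, as in the gen_dhchap_key sha512 64 call above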
00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b92810bd3be285eee2edb879294222d18dc48a324f802cd4 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Q1a 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Q1a 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Q1a 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:20.011 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7b6ff13085c0b018f9daaba3f3002338686697dfa8a52466 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.9c8 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7b6ff13085c0b018f9daaba3f3002338686697dfa8a52466 2 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7b6ff13085c0b018f9daaba3f3002338686697dfa8a52466 2 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7b6ff13085c0b018f9daaba3f3002338686697dfa8a52466 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.9c8 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.9c8 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.9c8 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:20.012 21:24:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3578033edfca3fe211e71233a0fda58b 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ZPn 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3578033edfca3fe211e71233a0fda58b 1 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3578033edfca3fe211e71233a0fda58b 1 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3578033edfca3fe211e71233a0fda58b 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ZPn 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ZPn 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ZPn 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=991f04e24445d884a61b4c6eef5fef57 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.PYY 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 991f04e24445d884a61b4c6eef5fef57 1 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 991f04e24445d884a61b4c6eef5fef57 1 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=991f04e24445d884a61b4c6eef5fef57 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:20.012 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.PYY 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.PYY 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.PYY 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4032193931ce4bd8bdad76a168496ffed0786d58be2f5cc8 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.rIN 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4032193931ce4bd8bdad76a168496ffed0786d58be2f5cc8 2 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4032193931ce4bd8bdad76a168496ffed0786d58be2f5cc8 2 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4032193931ce4bd8bdad76a168496ffed0786d58be2f5cc8 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.rIN 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.rIN 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.rIN 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:20.310 21:24:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2cf1e615daa6d8abfd8a77023a9c0272 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.KJV 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2cf1e615daa6d8abfd8a77023a9c0272 0 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2cf1e615daa6d8abfd8a77023a9c0272 0 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2cf1e615daa6d8abfd8a77023a9c0272 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.KJV 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.KJV 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.KJV 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=12233eb29b2543f5848320c4d65f40c2de29ca668e8f0973df115fb3de90e69e 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.JD9 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 12233eb29b2543f5848320c4d65f40c2de29ca668e8f0973df115fb3de90e69e 3 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 12233eb29b2543f5848320c4d65f40c2de29ca668e8f0973df115fb3de90e69e 3 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=12233eb29b2543f5848320c4d65f40c2de29ca668e8f0973df115fb3de90e69e 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.JD9 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.JD9 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.JD9 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3148291 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3148291 ']' 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:20.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:20.310 21:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nzP 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Ia6 ]] 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ia6 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Q1a 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.9c8 ]] 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.9c8 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ZPn 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.PYY ]] 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.PYY 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.rIN 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.KJV ]] 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.KJV 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.JD9 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:20.596 21:24:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:20.596 21:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:21.971 Waiting for block devices as requested 00:35:21.971 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:21.971 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:22.230 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:22.230 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:22.230 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:22.489 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:22.489 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:22.489 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:22.489 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:22.489 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:22.747 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:22.747 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:22.747 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:23.005 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:23.005 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:23.005 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:23.005 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:23.573 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:23.573 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:23.573 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:23.574 No valid GPT data, bailing 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:23.574 21:24:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:23.574 00:35:23.574 Discovery Log Number of Records 2, Generation counter 2 00:35:23.574 =====Discovery Log Entry 0====== 00:35:23.574 trtype: tcp 00:35:23.574 adrfam: ipv4 00:35:23.574 subtype: current discovery subsystem 00:35:23.574 treq: not specified, sq flow control disable supported 00:35:23.574 portid: 1 00:35:23.574 trsvcid: 4420 00:35:23.574 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:23.574 traddr: 10.0.0.1 00:35:23.574 eflags: none 00:35:23.574 sectype: none 00:35:23.574 =====Discovery Log Entry 1====== 00:35:23.574 trtype: tcp 00:35:23.574 adrfam: ipv4 00:35:23.574 subtype: nvme subsystem 00:35:23.574 treq: not specified, sq flow control disable supported 00:35:23.574 portid: 1 00:35:23.574 trsvcid: 4420 00:35:23.574 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:23.574 traddr: 10.0.0.1 00:35:23.574 eflags: none 00:35:23.574 sectype: none 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: ]] 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:23.574 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:23.575 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.575 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.575 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:23.575 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.575 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:23.575 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:23.575 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:23.575 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:23.575 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.575 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.834 nvme0n1 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: ]] 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
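With the keys registered in the app's keyring and the kernel nvmet subsystem listening on 10.0.0.1:4420, connect_authenticate reduces to a few RPCs against the SPDK application: restrict the DH-HMAC-CHAP digests and dhgroups, attach a controller with the host key (plus the controller key for bidirectional auth), and check that a controller named nvme0 appears. Stripped of the rpc_cmd wrapper, the sha256/ffdhe2048/key0 round seen above looks roughly like the following; rpc.py is assumed to reach the app on its default /var/tmp/spdk.sock.

RPC=./scripts/rpc.py    # same RPCs the trace issues through rpc_cmd

$RPC keyring_file_add_key key0 /tmp/spdk.key-null.nzP        # host secret for keyid 0
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ia6     # controller secret -> bidirectional auth
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
     -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
     --dhchap-key key0 --dhchap-ctrlr-key ckey0
$RPC bdev_nvme_get_controllers | jq -r '.[].name'            # expect nvme0 once authentication succeeds
$RPC bdev_nvme_detach_controller nvme0                       # tear down before the next digest/dhgroup/key combination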
00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.834 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.093 nvme0n1 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.093 21:24:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: ]] 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:24.093 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.094 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.352 nvme0n1 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: ]] 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.353 21:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.612 nvme0n1 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: ]] 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.612 nvme0n1 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.612 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.871 nvme0n1 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.871 21:24:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.871 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.872 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.872 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.872 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.872 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.872 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.872 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.872 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: ]] 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.130 nvme0n1 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.130 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.388 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: ]] 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:25.389 
21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.389 21:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.389 nvme0n1 00:35:25.389 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.389 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.389 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.389 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.389 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.389 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: ]] 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.645 21:24:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:25.645 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.646 nvme0n1 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.646 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: ]] 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.904 21:24:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.904 nvme0n1 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.904 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:26.163 21:24:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.163 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.422 nvme0n1 00:35:26.422 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.422 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.422 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.422 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.422 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.422 21:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: ]] 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.422 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.680 nvme0n1 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:26.680 21:25:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:26.680 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: ]] 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.681 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.939 nvme0n1 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: ]] 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:26.939 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.197 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.197 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.197 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:27.197 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:27.197 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:27.197 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.197 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.197 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:27.197 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.197 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:27.197 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:27.197 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:27.197 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:27.197 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.197 21:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.455 nvme0n1 00:35:27.455 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.455 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.455 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.455 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.455 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.455 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.455 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.455 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.455 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.455 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.455 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.455 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.455 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:27.455 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.455 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:27.455 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:35:27.455 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:27.455 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: ]] 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.456 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.714 nvme0n1 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.714 21:25:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:27.714 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.715 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.715 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:27.715 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.715 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:27.715 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:27.715 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:27.715 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:27.715 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.715 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.974 nvme0n1 00:35:27.974 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.974 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.974 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.974 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.974 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: ]] 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.232 21:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.798 nvme0n1 00:35:28.798 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.798 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.798 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.798 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.798 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.798 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.798 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.798 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.798 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.798 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: ]] 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 
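The trace above repeats one host-side RPC sequence per digest/DH-group/key-id combination: set the allowed DH-CHAP digests and groups, attach the controller with the host and (optionally) controller keys, confirm the controller came up, then detach before the next combination. The following is a minimal sketch of a single iteration, assuming SPDK's scripts/rpc.py client is on PATH, the target is listening on 10.0.0.1:4420, and the named keys (key3/ckey3) are already registered; every subcommand, flag, NQN and key name below is taken verbatim from the trace, nothing else is implied.

#!/usr/bin/env bash
# One connect_authenticate-style iteration (sha256 / ffdhe4096 / keyid 3), as exercised in the trace.
rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0" if authentication succeeded
rpc.py bdev_nvme_detach_controller nvme0              # tear down before the next digest/dhgroup/keyid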
00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.799 21:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.365 nvme0n1 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.365 21:25:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: ]] 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.365 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.930 nvme0n1 00:35:29.930 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.930 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.930 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.930 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.930 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.930 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.930 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.930 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.930 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.930 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.930 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: ]] 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.931 21:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.497 nvme0n1 00:35:30.497 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.497 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.497 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.497 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.497 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.497 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.756 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.323 nvme0n1 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: ]] 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.323 21:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:32.257 nvme0n1 00:35:32.257 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.257 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.257 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.257 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.257 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.257 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.257 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.257 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.257 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: ]] 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.258 21:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.192 nvme0n1 00:35:33.192 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.192 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.192 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.192 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.192 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.192 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.192 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.192 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.192 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.192 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:33.450 
21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: ]] 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.450 21:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.450 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.450 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.450 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:33.451 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:33.451 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:33.451 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.451 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.451 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:33.451 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.451 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:33.451 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:33.451 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:33.451 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:33.451 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.451 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.385 nvme0n1 00:35:34.385 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.385 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.385 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.385 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.385 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.385 21:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: ]] 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.385 
21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.385 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.320 nvme0n1 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.320 21:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.320 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.320 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.320 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:35.320 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:35.320 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:35.320 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.320 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.320 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:35.320 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.320 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:35.320 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:35.320 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:35.320 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:35.320 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.320 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.252 nvme0n1 00:35:36.252 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.252 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.252 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.252 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.252 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.252 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.252 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.252 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.252 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.252 21:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: ]] 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.253 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.511 nvme0n1 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: ]] 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.511 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.769 nvme0n1 00:35:36.769 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.769 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.769 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.769 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.769 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.769 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.769 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.769 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.769 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.769 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.769 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:36.770 21:25:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: ]] 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.770 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.028 nvme0n1 00:35:37.028 21:25:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: ]] 00:35:37.028 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.029 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.287 nvme0n1 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.287 21:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.545 nvme0n1 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: ]] 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:37.545 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.546 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.803 nvme0n1 00:35:37.803 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.804 
21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: ]] 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:37.804 21:25:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.804 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.062 nvme0n1 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: ]] 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.062 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:38.063 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.063 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:38.063 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:38.063 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:38.063 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:38.063 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.063 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.322 nvme0n1 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: ]] 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.322 21:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.581 nvme0n1 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:38.581 
21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.581 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.839 nvme0n1 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.839 
21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: ]] 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:38.839 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.840 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.840 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:38.840 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.840 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:38.840 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:38.840 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:38.840 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:38.840 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.840 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.098 nvme0n1 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: ]] 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:39.098 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.099 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.099 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.099 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.099 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:39.099 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:39.099 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:39.099 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.099 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.099 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:39.099 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.099 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:39.099 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:39.099 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:39.099 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:39.099 21:25:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.099 21:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.665 nvme0n1 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: ]] 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.665 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.924 nvme0n1 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: ]] 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.924 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.183 nvme0n1 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:40.183 21:25:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.183 21:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.749 nvme0n1 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: ]] 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:40.749 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.750 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.750 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.750 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.750 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:40.750 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:40.750 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:40.750 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.750 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.750 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:40.750 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.750 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:40.750 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:40.750 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:40.750 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:40.750 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.750 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.317 nvme0n1 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: ]] 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.317 21:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.884 nvme0n1 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.884 21:25:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: ]] 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.884 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.885 21:25:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.885 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:41.885 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:41.885 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:41.885 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.885 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.885 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:41.885 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.885 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:41.885 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:41.885 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:41.885 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:41.885 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.885 21:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.451 nvme0n1 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: ]] 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:42.451 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.451 
21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.016 nvme0n1 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.016 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.017 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.017 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.017 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:43.017 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.017 21:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.623 nvme0n1 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.623 21:25:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: ]] 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.623 21:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.554 nvme0n1 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: ]] 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:44.554 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.555 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:44.555 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.555 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.555 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.555 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.555 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.555 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.555 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.555 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.555 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.555 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.555 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.555 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.555 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.555 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.555 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:44.555 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.555 21:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.928 nvme0n1 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
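Each pass traced above repeats the same host-side RPC sequence: restrict the bdev_nvme layer to one DH-HMAC-CHAP digest/dhgroup pair, attach the controller with the matching key names, check that the controller actually appeared, and detach again. A minimal sketch of one such pass, assuming the suite's rpc_cmd JSON-RPC wrapper and the 10.0.0.1:4420 listener and NQNs shown in the log; key1/ckey1 refer to key objects set up earlier in the test, outside this excerpt:

# Allow only the digest/dhgroup combination under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

# Attach with host key and controller (bidirectional) key; this only succeeds
# if DH-HMAC-CHAP completes with the parameters programmed into the target.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

# The controller shows up in get_controllers only after authentication passed.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

rpc_cmd bdev_nvme_detach_controller nvme0

The detach at the end is what lets the next digest/dhgroup/key combination reuse the same nvme0 controller name.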
xtrace_disable 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: ]] 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.928 
21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.928 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:45.929 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:45.929 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:45.929 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:45.929 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.929 21:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.495 nvme0n1 00:35:46.495 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.495 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.495 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.495 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.495 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.495 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.495 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: ]] 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.754 21:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.689 nvme0n1 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.689 21:25:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:47.689 21:25:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.689 21:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.623 nvme0n1 00:35:48.623 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.623 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.623 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.623 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.623 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.623 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.623 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.623 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
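The get_main_ns_ip helper traced repeatedly in this section resolves the initiator address indirectly: it maps the transport name to the name of an environment variable and then dereferences it, which is why the trace shows ip=NVMF_INITIATOR_IP followed by echo 10.0.0.1. A simplified reconstruction of that logic from the trace, with the two emptiness checks collapsed into one line and error paths omitted; the variable name TEST_TRANSPORT is inferred, since the trace only shows its expanded value tcp:

get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
                ["rdma"]=NVMF_FIRST_TARGET_IP
                ["tcp"]=NVMF_INITIATOR_IP
        )
        # TEST_TRANSPORT is tcp in this run, so ip holds the *name* NVMF_INITIATOR_IP.
        [[ -z "$TEST_TRANSPORT" || -z "${ip_candidates[$TEST_TRANSPORT]}" ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # ${!ip} indirection turns that name into its value, 10.0.0.1 here.
        [[ -z "${!ip}" ]] && return 1
        echo "${!ip}"
}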
ckey=DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: ]] 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.624 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:48.882 nvme0n1 00:35:48.882 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.882 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.882 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.882 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.882 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.882 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.882 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.882 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.882 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.882 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.882 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.882 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.882 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:48.882 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.882 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.882 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:48.882 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: ]] 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.883 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.141 nvme0n1 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:49.141 
21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: ]] 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.141 nvme0n1 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.141 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: ]] 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.433 
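By this point the trace has finished the sha384/ffdhe8192 combinations and is sweeping sha512 with ffdhe2048; the rollover is simply the outer loops at host/auth.sh@100-102 advancing. Condensed, the sweep has this shape, where the loop bodies are the two helpers seen at host/auth.sh@103-104 and the array contents are inferred from the combinations that actually appear in the log, so treat them as illustrative:

for digest in "${digests[@]}"; do                # sha384 and sha512 appear in this excerpt
        for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ... ffdhe8192
                for keyid in "${!keys[@]}"; do   # key indices 0..4
                        # Re-key the kernel target, then run one authenticated
                        # attach/verify/detach cycle through the host RPCs.
                        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                        connect_authenticate "$digest" "$dhgroup" "$keyid"
                done
        done
done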
21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:49.433 21:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:49.433 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.433 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.433 nvme0n1 00:35:49.433 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.433 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.433 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.433 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.433 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.433 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.433 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.433 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.433 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.433 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:49.720 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.720 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.720 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:49.720 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.720 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:49.720 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:49.720 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:49.720 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:49.720 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:49.720 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:49.720 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:49.720 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:49.720 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:49.720 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:49.720 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.720 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:49.720 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:49.720 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.721 nvme0n1 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: ]] 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.721 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.981 nvme0n1 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.981 
21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: ]] 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:49.981 21:25:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.981 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.240 nvme0n1 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:50.240 21:25:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: ]] 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.240 21:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.240 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:50.240 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.240 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:50.240 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:50.240 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:50.240 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:50.240 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.240 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.499 nvme0n1 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: ]] 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.499 21:25:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.499 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.759 nvme0n1 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:50.759 
21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.759 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
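[editor's note] The trace above completes the sha512/ffdhe3072 pass: for each key ID 0-4 the host restricts its DH-HMAC-CHAP digests and DH groups with bdev_nvme_set_options, attaches controller nvme0 over TCP using the named keyring entry (plus a controller key when one exists, which key ID 4 does not), confirms it via bdev_nvme_get_controllers, and detaches before the next key. The sketch below condenses that cycle; it is not the suite's host/auth.sh, and it assumes scripts/rpc.py stands in for the rpc_cmd wrapper, that key0..key4 (and ckey0..ckey3) are already registered on the host side, and that the matching DHHC-1 secrets have already been installed on the target (the nvmet_auth_set_key step seen in the trace).

```bash
#!/usr/bin/env bash
# Condensed per-key authentication cycle (sketch; the rpc.py path and key
# names are assumptions mirroring the flags visible in the trace above).
rpc=scripts/rpc.py
digest=sha512

for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
    for keyid in 0 1 2 3 4; do
        # Limit negotiation to the digest/DH-group pair under test.
        "$rpc" bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Key ID 4 has no bidirectional (controller) key in this run.
        ctrlr_key=()
        if [ "$keyid" -lt 4 ]; then
            ctrlr_key=(--dhchap-ctrlr-key "ckey${keyid}")
        fi

        # Connect over TCP, authenticating with the named keyring entries.
        "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ctrlr_key[@]}"

        # A successful handshake surfaces as controller "nvme0".
        "$rpc" bdev_nvme_get_controllers | jq -r '.[].name'

        # Tear down before the next key/DH-group combination.
        "$rpc" bdev_nvme_detach_controller nvme0
    done
done
```

The remaining output below repeats the same cycle for the ffdhe4096 and ffdhe6144 DH groups.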
00:35:51.019 nvme0n1 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: ]] 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:51.019 21:25:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.019 21:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.587 nvme0n1 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.587 21:25:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: ]] 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.587 21:25:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.587 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.846 nvme0n1 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: ]] 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.846 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.847 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.847 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:51.847 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:51.847 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:51.847 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.847 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.847 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:51.847 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.847 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:51.847 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:51.847 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:51.847 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:51.847 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.847 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.105 nvme0n1 00:35:52.105 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.105 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.105 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.105 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.105 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.105 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.105 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.105 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:52.105 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: ]] 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.106 21:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.673 nvme0n1 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.673 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.932 nvme0n1 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: ]] 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.932 21:25:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.932 21:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.498 nvme0n1 00:35:53.498 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.498 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.498 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.498 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.498 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: ]] 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:53.499 21:25:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.499 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.066 nvme0n1 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: ]] 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.066 21:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.633 nvme0n1 00:35:54.634 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.634 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.634 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.634 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.634 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.634 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.634 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.634 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.634 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.634 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: ]] 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.892 21:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.458 nvme0n1 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:55.458 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:55.459 21:25:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.459 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.025 nvme0n1 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY5ZjlmZjUwM2M5YWQ1NTVjZmVkMGQ5YWQxMDJhNTRZhzOl: 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: ]] 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA1OTRkNjg0NjAwMDhmNTA4ODc2NGYwMGNiNTkxNzdiNTU1YjU3OWExMmIxNDM5OTY1YmNiNmVjNTQ1OWI3Yt/5lW4=: 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.025 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.026 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.026 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.026 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:56.026 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.026 21:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.960 nvme0n1 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: ]] 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:56.960 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.961 21:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.895 nvme0n1 00:35:57.895 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.895 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.895 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.895 21:25:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.895 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.895 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.895 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.895 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.895 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.895 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.895 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.895 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.895 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:57.895 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.895 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.895 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: ]] 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.153 21:25:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.153 21:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.088 nvme0n1 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAzMjE5MzkzMWNlNGJkOGJkYWQ3NmExNjg0OTZmZmVkMDc4NmQ1OGJlMmY1Y2M4OC6BmQ==: 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: ]] 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmNmMWU2MTVkYWE2ZDhhYmZkOGE3NzAyM2E5YzAyNzKPv7U8: 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:59.088 21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.088 
21:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.022 nvme0n1 00:36:00.022 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.022 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.022 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.022 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.022 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.022 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.022 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.022 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.022 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.022 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.022 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.022 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.022 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:00.022 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.022 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.022 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:00.022 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTIyMzNlYjI5YjI1NDNmNTg0ODMyMGM0ZDY1ZjQwYzJkZTI5Y2E2NjhlOGYwOTczZGYxMTVmYjNkZTkwZTY5ZdnXDb0=: 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.023 21:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.957 nvme0n1 00:36:00.957 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.957 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.957 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.957 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.957 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.957 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.957 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: ]] 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.958 request: 00:36:00.958 { 00:36:00.958 "name": "nvme0", 00:36:00.958 "trtype": "tcp", 00:36:00.958 "traddr": "10.0.0.1", 00:36:00.958 "adrfam": "ipv4", 00:36:00.958 "trsvcid": "4420", 00:36:00.958 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:00.958 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:00.958 "prchk_reftag": false, 00:36:00.958 "prchk_guard": false, 00:36:00.958 "hdgst": false, 00:36:00.958 "ddgst": false, 00:36:00.958 "allow_unrecognized_csi": false, 00:36:00.958 "method": "bdev_nvme_attach_controller", 00:36:00.958 "req_id": 1 00:36:00.958 } 00:36:00.958 Got JSON-RPC error response 00:36:00.958 response: 00:36:00.958 { 00:36:00.958 "code": -5, 00:36:00.958 "message": "Input/output error" 00:36:00.958 } 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.958 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:01.216 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.216 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:01.216 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:01.216 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:01.216 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:01.216 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:01.216 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.216 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.216 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:01.216 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.216 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:01.216 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
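The request/response pair above is the first negative check in this block: host/auth.sh attempts bdev_nvme_attach_controller without any --dhchap-key while the target subsystem still requires DH-CHAP, the RPC returns code -5 ("Input/output error"), and the follow-up bdev_nvme_get_controllers | jq length confirms the failed handshake left no controller behind. The surrounding get_main_ns_ip entries only re-derive the initiator address (10.0.0.1) before the next attempt, which presents --dhchap-key key2. A minimal sketch of that expected-failure pattern, assuming scripts/rpc.py is the concrete client behind the test's rpc_cmd wrapper and reusing the address and NQNs from the log:

    # Expected-failure path: no DH-CHAP key supplied while the target requires one.
    # scripts/rpc.py, the address and the NQNs mirror the log; this is an illustrative
    # sketch, not the test script itself.
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "unexpected: attach without a DH-CHAP key succeeded" >&2
        exit 1
    fi
    # A rejected authentication must not leave a half-attached controller around.
    [[ "$(scripts/rpc.py bdev_nvme_get_controllers | jq length)" -eq 0 ]]

The same zero-length check is repeated after every rejected attempt in this section, so each negative case starts from a clean controller list.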
00:36:01.216 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.217 request: 00:36:01.217 { 00:36:01.217 "name": "nvme0", 00:36:01.217 "trtype": "tcp", 00:36:01.217 "traddr": "10.0.0.1", 00:36:01.217 "adrfam": "ipv4", 00:36:01.217 "trsvcid": "4420", 00:36:01.217 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:01.217 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:01.217 "prchk_reftag": false, 00:36:01.217 "prchk_guard": false, 00:36:01.217 "hdgst": false, 00:36:01.217 "ddgst": false, 00:36:01.217 "dhchap_key": "key2", 00:36:01.217 "allow_unrecognized_csi": false, 00:36:01.217 "method": "bdev_nvme_attach_controller", 00:36:01.217 "req_id": 1 00:36:01.217 } 00:36:01.217 Got JSON-RPC error response 00:36:01.217 response: 00:36:01.217 { 00:36:01.217 "code": -5, 00:36:01.217 "message": "Input/output error" 00:36:01.217 } 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.217 21:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.217 request: 00:36:01.217 { 00:36:01.217 "name": "nvme0", 00:36:01.217 "trtype": "tcp", 00:36:01.217 "traddr": "10.0.0.1", 00:36:01.217 "adrfam": "ipv4", 00:36:01.217 "trsvcid": "4420", 00:36:01.217 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:01.217 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:01.217 "prchk_reftag": false, 00:36:01.217 "prchk_guard": false, 00:36:01.217 "hdgst": false, 00:36:01.217 "ddgst": false, 00:36:01.217 "dhchap_key": "key1", 00:36:01.217 "dhchap_ctrlr_key": "ckey2", 00:36:01.217 "allow_unrecognized_csi": false, 00:36:01.217 "method": "bdev_nvme_attach_controller", 00:36:01.217 "req_id": 1 00:36:01.217 } 00:36:01.217 Got JSON-RPC error response 00:36:01.217 response: 00:36:01.217 { 00:36:01.217 "code": -5, 00:36:01.217 "message": "Input/output 
error" 00:36:01.217 } 00:36:01.217 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.475 nvme0n1 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:01.475 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:01.476 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:01.476 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:36:01.476 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:36:01.476 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:01.476 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:01.476 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:36:01.476 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: ]] 00:36:01.476 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:36:01.476 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:01.476 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.476 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.476 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.476 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.476 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.476 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:01.476 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.476 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.734 request: 00:36:01.734 { 00:36:01.734 "name": "nvme0", 00:36:01.734 "dhchap_key": "key1", 00:36:01.734 "dhchap_ctrlr_key": "ckey2", 00:36:01.734 "method": "bdev_nvme_set_keys", 00:36:01.734 "req_id": 1 00:36:01.734 } 00:36:01.734 Got JSON-RPC error response 00:36:01.734 response: 00:36:01.734 { 00:36:01.734 "code": -13, 00:36:01.734 "message": "Permission denied" 00:36:01.734 } 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:01.734 21:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:02.668 21:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.668 21:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:02.668 21:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.668 21:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.668 21:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.668 21:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:02.668 21:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkyODEwYmQzYmUyODVlZWUyZWRiODc5Mjk0MjIyZDE4ZGM0OGEzMjRmODAyY2Q0mPrpYw==: 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: ]] 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:N2I2ZmYxMzA4NWMwYjAxOGY5ZGFhYmEzZjMwMDIzMzg2ODY2OTdkZmE4YTUyNDY2KxCHvQ==: 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.041 nvme0n1 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzU3ODAzM2VkZmNhM2ZlMjExZTcxMjMzYTBmZGE1OGLyQ5u+: 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: ]] 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTkxZjA0ZTI0NDQ1ZDg4NGE2MWI0YzZlZWY1ZmVmNTcXz3Yg: 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.041 request: 00:36:04.041 { 00:36:04.041 "name": "nvme0", 00:36:04.041 "dhchap_key": "key2", 00:36:04.041 "dhchap_ctrlr_key": "ckey1", 00:36:04.041 "method": "bdev_nvme_set_keys", 00:36:04.041 "req_id": 1 00:36:04.041 } 00:36:04.041 Got JSON-RPC error response 00:36:04.041 response: 00:36:04.041 { 00:36:04.041 "code": -13, 00:36:04.041 "message": "Permission denied" 00:36:04.041 } 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:04.041 21:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:04.976 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.976 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:04.976 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.976 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.976 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:05.235 21:25:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:05.235 rmmod nvme_tcp 00:36:05.235 rmmod nvme_fabrics 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3148291 ']' 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3148291 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3148291 ']' 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3148291 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3148291 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3148291' 00:36:05.235 killing process with pid 3148291 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3148291 00:36:05.235 21:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3148291 00:36:06.170 21:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:06.170 21:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:06.170 21:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:06.170 21:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:36:06.170 21:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:36:06.170 21:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:06.170 21:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:36:06.170 21:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:06.170 21:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:06.170 21:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:06.170 21:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:36:06.170 21:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:08.703 21:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:08.703 21:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:08.703 21:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:08.703 21:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:08.703 21:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:08.703 21:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:36:08.703 21:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:08.703 21:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:08.703 21:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:08.703 21:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:08.703 21:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:08.703 21:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:08.703 21:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:09.270 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:09.270 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:09.270 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:09.270 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:09.270 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:09.270 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:09.270 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:09.529 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:09.529 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:09.529 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:09.529 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:09.529 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:09.529 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:09.529 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:09.529 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:09.529 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:10.467 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:10.467 21:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.nzP /tmp/spdk.key-null.Q1a /tmp/spdk.key-sha256.ZPn /tmp/spdk.key-sha384.rIN /tmp/spdk.key-sha512.JD9 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:10.467 21:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:11.402 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:11.402 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:11.402 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:36:11.402 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:11.402 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:11.402 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:11.402 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:11.402 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:11.402 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:11.661 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:11.661 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:11.661 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:11.661 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:11.661 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:11.661 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:11.661 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:11.661 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:11.661 00:36:11.661 real 0m55.238s 00:36:11.661 user 0m53.153s 00:36:11.661 sys 0m6.366s 00:36:11.661 21:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:11.661 21:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.661 ************************************ 00:36:11.661 END TEST nvmf_auth_host 00:36:11.661 ************************************ 00:36:11.661 21:25:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:36:11.661 21:25:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:11.661 21:25:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:11.661 21:25:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:11.661 21:25:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.661 ************************************ 00:36:11.661 START TEST nvmf_digest 00:36:11.661 ************************************ 00:36:11.661 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:11.921 * Looking for test storage... 
00:36:11.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:11.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.921 --rc genhtml_branch_coverage=1 00:36:11.921 --rc genhtml_function_coverage=1 00:36:11.921 --rc genhtml_legend=1 00:36:11.921 --rc geninfo_all_blocks=1 00:36:11.921 --rc geninfo_unexecuted_blocks=1 00:36:11.921 00:36:11.921 ' 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:11.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.921 --rc genhtml_branch_coverage=1 00:36:11.921 --rc genhtml_function_coverage=1 00:36:11.921 --rc genhtml_legend=1 00:36:11.921 --rc geninfo_all_blocks=1 00:36:11.921 --rc geninfo_unexecuted_blocks=1 00:36:11.921 00:36:11.921 ' 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:11.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.921 --rc genhtml_branch_coverage=1 00:36:11.921 --rc genhtml_function_coverage=1 00:36:11.921 --rc genhtml_legend=1 00:36:11.921 --rc geninfo_all_blocks=1 00:36:11.921 --rc geninfo_unexecuted_blocks=1 00:36:11.921 00:36:11.921 ' 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:11.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.921 --rc genhtml_branch_coverage=1 00:36:11.921 --rc genhtml_function_coverage=1 00:36:11.921 --rc genhtml_legend=1 00:36:11.921 --rc geninfo_all_blocks=1 00:36:11.921 --rc geninfo_unexecuted_blocks=1 00:36:11.921 00:36:11.921 ' 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:11.921 
21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:11.921 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:11.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:11.922 21:25:45 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:36:11.922 21:25:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:13.824 
21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:13.824 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:13.824 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:13.824 Found net devices under 0000:0a:00.0: cvl_0_0 
00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:13.824 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:13.824 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:13.825 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:13.825 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:13.825 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:13.825 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:13.825 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:13.825 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:13.825 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:13.825 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:13.825 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:13.825 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:13.825 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:13.825 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:13.825 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:13.825 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:13.825 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:13.825 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:14.084 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:14.084 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:36:14.084 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:14.084 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:14.084 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:14.084 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:14.084 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:14.084 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:14.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:14.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:36:14.084 00:36:14.084 --- 10.0.0.2 ping statistics --- 00:36:14.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.085 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:14.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:14.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:36:14.085 00:36:14.085 --- 10.0.0.1 ping statistics --- 00:36:14.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:14.085 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:14.085 ************************************ 00:36:14.085 START TEST nvmf_digest_clean 00:36:14.085 ************************************ 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3158415 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3158415 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3158415 ']' 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:14.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:14.085 21:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:14.344 [2024-11-19 21:25:47.904088] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:36:14.344 [2024-11-19 21:25:47.904246] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:14.344 [2024-11-19 21:25:48.059715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:14.602 [2024-11-19 21:25:48.200154] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:14.602 [2024-11-19 21:25:48.200252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:14.602 [2024-11-19 21:25:48.200279] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:14.602 [2024-11-19 21:25:48.200305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:14.602 [2024-11-19 21:25:48.200324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:14.602 [2024-11-19 21:25:48.202008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:15.168 21:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:15.168 21:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:15.168 21:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:15.168 21:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:15.168 21:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:15.168 21:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:15.168 21:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:15.168 21:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:15.168 21:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:15.168 21:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.168 21:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:15.734 null0 00:36:15.734 [2024-11-19 21:25:49.275017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:15.734 [2024-11-19 21:25:49.299345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:15.734 21:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.734 21:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:36:15.734 21:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:15.734 21:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:15.734 21:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:15.734 21:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:15.734 21:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:15.734 21:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:15.734 21:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3158574 00:36:15.734 21:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:15.734 21:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3158574 /var/tmp/bperf.sock 00:36:15.734 21:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3158574 ']' 00:36:15.734 21:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:15.734 21:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:36:15.734 21:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:15.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:15.734 21:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:15.734 21:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:15.734 [2024-11-19 21:25:49.393723] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:36:15.734 [2024-11-19 21:25:49.393887] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3158574 ] 00:36:15.992 [2024-11-19 21:25:49.550729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:15.992 [2024-11-19 21:25:49.691828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.558 21:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:16.558 21:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:16.558 21:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:16.559 21:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:16.559 21:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:17.493 21:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:17.493 21:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:17.751 nvme0n1 00:36:17.751 21:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:17.751 21:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:17.751 Running I/O for 2 seconds... 
00:36:20.057 13752.00 IOPS, 53.72 MiB/s [2024-11-19T20:25:53.852Z] 13481.50 IOPS, 52.66 MiB/s 00:36:20.057 Latency(us) 00:36:20.057 [2024-11-19T20:25:53.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:20.057 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:20.057 nvme0n1 : 2.01 13502.69 52.74 0.00 0.00 9466.48 4684.61 20194.80 00:36:20.057 [2024-11-19T20:25:53.852Z] =================================================================================================================== 00:36:20.057 [2024-11-19T20:25:53.852Z] Total : 13502.69 52.74 0.00 0.00 9466.48 4684.61 20194.80 00:36:20.057 { 00:36:20.057 "results": [ 00:36:20.057 { 00:36:20.057 "job": "nvme0n1", 00:36:20.057 "core_mask": "0x2", 00:36:20.057 "workload": "randread", 00:36:20.057 "status": "finished", 00:36:20.057 "queue_depth": 128, 00:36:20.057 "io_size": 4096, 00:36:20.057 "runtime": 2.006341, 00:36:20.057 "iops": 13502.689722235651, 00:36:20.057 "mibps": 52.74488172748301, 00:36:20.057 "io_failed": 0, 00:36:20.057 "io_timeout": 0, 00:36:20.057 "avg_latency_us": 9466.47596870356, 00:36:20.057 "min_latency_us": 4684.61037037037, 00:36:20.057 "max_latency_us": 20194.79703703704 00:36:20.057 } 00:36:20.057 ], 00:36:20.057 "core_count": 1 00:36:20.057 } 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:20.057 | select(.opcode=="crc32c") 00:36:20.057 | "\(.module_name) \(.executed)"' 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3158574 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3158574 ']' 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3158574 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3158574 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3158574' 00:36:20.057 killing process with pid 3158574 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3158574 00:36:20.057 Received shutdown signal, test time was about 2.000000 seconds 00:36:20.057 00:36:20.057 Latency(us) 00:36:20.057 [2024-11-19T20:25:53.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:20.057 [2024-11-19T20:25:53.852Z] =================================================================================================================== 00:36:20.057 [2024-11-19T20:25:53.852Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:20.057 21:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3158574 00:36:20.991 21:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:20.991 21:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:20.991 21:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:20.991 21:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:20.991 21:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:20.991 21:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:20.991 21:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:20.991 21:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3159234 00:36:20.991 21:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:20.991 21:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3159234 /var/tmp/bperf.sock 00:36:20.991 21:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3159234 ']' 00:36:20.991 21:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:20.991 21:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:20.991 21:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:20.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:20.991 21:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:20.991 21:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:20.991 [2024-11-19 21:25:54.769046] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:36:20.991 [2024-11-19 21:25:54.769185] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159234 ] 00:36:20.991 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:20.991 Zero copy mechanism will not be used. 00:36:21.250 [2024-11-19 21:25:54.909452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:21.508 [2024-11-19 21:25:55.046063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:22.126 21:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:22.126 21:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:22.126 21:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:22.126 21:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:22.126 21:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:22.693 21:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:22.693 21:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:22.950 nvme0n1 00:36:22.950 21:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:22.950 21:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:23.208 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:23.208 Zero copy mechanism will not be used. 00:36:23.208 Running I/O for 2 seconds... 
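The MiB/s column in each result block is just iops scaled by the I/O size into mebibytes, which gives a quick sanity check on the tables; for example, with the figures reported for the two randread runs (13502.69 IOPS at 4096 bytes above, 4901.77 IOPS at 131072 bytes below):

# The reported mibps matches iops * io_size / 2^20 for every run in this section.
awk 'BEGIN { printf "%.2f\n", 13502.69 * 4096   / 1048576 }'   # 52.74 MiB/s (4 KiB randread run above)
awk 'BEGIN { printf "%.2f\n", 4901.77  * 131072 / 1048576 }'   # 612.72 MiB/s (128 KiB randread run below)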
00:36:25.076 4801.00 IOPS, 600.12 MiB/s [2024-11-19T20:25:58.871Z] 4903.00 IOPS, 612.88 MiB/s 00:36:25.076 Latency(us) 00:36:25.076 [2024-11-19T20:25:58.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:25.076 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:25.076 nvme0n1 : 2.00 4901.77 612.72 0.00 0.00 3257.67 1086.20 5631.24 00:36:25.076 [2024-11-19T20:25:58.871Z] =================================================================================================================== 00:36:25.076 [2024-11-19T20:25:58.871Z] Total : 4901.77 612.72 0.00 0.00 3257.67 1086.20 5631.24 00:36:25.076 { 00:36:25.076 "results": [ 00:36:25.076 { 00:36:25.076 "job": "nvme0n1", 00:36:25.076 "core_mask": "0x2", 00:36:25.076 "workload": "randread", 00:36:25.076 "status": "finished", 00:36:25.076 "queue_depth": 16, 00:36:25.076 "io_size": 131072, 00:36:25.076 "runtime": 2.003766, 00:36:25.076 "iops": 4901.769967151853, 00:36:25.076 "mibps": 612.7212458939816, 00:36:25.076 "io_failed": 0, 00:36:25.076 "io_timeout": 0, 00:36:25.076 "avg_latency_us": 3257.666057603113, 00:36:25.076 "min_latency_us": 1086.1985185185185, 00:36:25.076 "max_latency_us": 5631.241481481481 00:36:25.076 } 00:36:25.076 ], 00:36:25.076 "core_count": 1 00:36:25.076 } 00:36:25.076 21:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:25.076 21:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:25.076 21:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:25.076 21:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:25.076 | select(.opcode=="crc32c") 00:36:25.076 | "\(.module_name) \(.executed)"' 00:36:25.076 21:25:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:25.641 21:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:25.642 21:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:25.642 21:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:25.642 21:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:25.642 21:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3159234 00:36:25.642 21:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3159234 ']' 00:36:25.642 21:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3159234 00:36:25.642 21:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:25.642 21:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:25.642 21:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3159234 00:36:25.642 21:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:25.642 21:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:36:25.642 21:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3159234' 00:36:25.642 killing process with pid 3159234 00:36:25.642 21:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3159234 00:36:25.642 Received shutdown signal, test time was about 2.000000 seconds 00:36:25.642 00:36:25.642 Latency(us) 00:36:25.642 [2024-11-19T20:25:59.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:25.642 [2024-11-19T20:25:59.437Z] =================================================================================================================== 00:36:25.642 [2024-11-19T20:25:59.437Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:25.642 21:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3159234 00:36:26.576 21:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:26.576 21:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:26.576 21:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:26.576 21:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:26.576 21:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:26.576 21:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:26.576 21:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:26.576 21:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3159835 00:36:26.576 21:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:26.576 21:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3159835 /var/tmp/bperf.sock 00:36:26.576 21:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3159835 ']' 00:36:26.576 21:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:26.576 21:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:26.576 21:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:26.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:26.577 21:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:26.577 21:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:26.577 [2024-11-19 21:26:00.185732] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:36:26.577 [2024-11-19 21:26:00.185861] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159835 ] 00:36:26.577 [2024-11-19 21:26:00.329267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.835 [2024-11-19 21:26:00.458192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:27.401 21:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:27.401 21:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:27.401 21:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:27.401 21:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:27.402 21:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:28.336 21:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:28.336 21:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:28.594 nvme0n1 00:36:28.594 21:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:28.594 21:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:28.853 Running I/O for 2 seconds... 
00:36:30.721 14833.00 IOPS, 57.94 MiB/s [2024-11-19T20:26:04.516Z] 14672.50 IOPS, 57.31 MiB/s 00:36:30.721 Latency(us) 00:36:30.721 [2024-11-19T20:26:04.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.721 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:30.721 nvme0n1 : 2.01 14673.13 57.32 0.00 0.00 8698.21 3980.71 18738.44 00:36:30.721 [2024-11-19T20:26:04.516Z] =================================================================================================================== 00:36:30.721 [2024-11-19T20:26:04.516Z] Total : 14673.13 57.32 0.00 0.00 8698.21 3980.71 18738.44 00:36:30.721 { 00:36:30.721 "results": [ 00:36:30.721 { 00:36:30.721 "job": "nvme0n1", 00:36:30.721 "core_mask": "0x2", 00:36:30.721 "workload": "randwrite", 00:36:30.721 "status": "finished", 00:36:30.721 "queue_depth": 128, 00:36:30.721 "io_size": 4096, 00:36:30.721 "runtime": 2.010818, 00:36:30.721 "iops": 14673.133023476019, 00:36:30.721 "mibps": 57.3169258729532, 00:36:30.721 "io_failed": 0, 00:36:30.721 "io_timeout": 0, 00:36:30.721 "avg_latency_us": 8698.212096229767, 00:36:30.721 "min_latency_us": 3980.705185185185, 00:36:30.721 "max_latency_us": 18738.44148148148 00:36:30.721 } 00:36:30.721 ], 00:36:30.721 "core_count": 1 00:36:30.721 } 00:36:30.721 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:30.721 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:30.721 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:30.721 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:30.721 | select(.opcode=="crc32c") 00:36:30.721 | "\(.module_name) \(.executed)"' 00:36:30.721 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:31.287 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:31.287 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:31.287 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:31.287 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:31.287 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3159835 00:36:31.287 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3159835 ']' 00:36:31.287 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3159835 00:36:31.287 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:31.287 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:31.287 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3159835 00:36:31.287 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:31.287 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:36:31.287 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3159835' 00:36:31.287 killing process with pid 3159835 00:36:31.287 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3159835 00:36:31.287 Received shutdown signal, test time was about 2.000000 seconds 00:36:31.287 00:36:31.287 Latency(us) 00:36:31.287 [2024-11-19T20:26:05.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:31.287 [2024-11-19T20:26:05.082Z] =================================================================================================================== 00:36:31.287 [2024-11-19T20:26:05.082Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:31.287 21:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3159835 00:36:32.222 21:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:32.222 21:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:32.222 21:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:32.222 21:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:32.222 21:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:32.222 21:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:32.222 21:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:32.222 21:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3160451 00:36:32.222 21:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:32.222 21:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3160451 /var/tmp/bperf.sock 00:36:32.222 21:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3160451 ']' 00:36:32.222 21:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:32.222 21:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:32.222 21:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:32.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:32.222 21:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:32.222 21:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:32.222 [2024-11-19 21:26:05.801170] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:36:32.222 [2024-11-19 21:26:05.801301] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3160451 ] 00:36:32.222 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:32.222 Zero copy mechanism will not be used. 00:36:32.222 [2024-11-19 21:26:05.953459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:32.480 [2024-11-19 21:26:06.089302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:33.047 21:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:33.047 21:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:33.047 21:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:33.047 21:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:33.047 21:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:33.980 21:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:33.980 21:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:34.238 nvme0n1 00:36:34.238 21:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:34.238 21:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:34.238 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:34.238 Zero copy mechanism will not be used. 00:36:34.238 Running I/O for 2 seconds... 
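Each run's pass/fail decision then comes down to the crc32c accounting read back over the same socket: the harness checks that some crc32c work was executed and that it landed in the expected module, which is the software path here because DSA scanning is disabled (scan_dsa=false above). A minimal sketch of that check, assuming SPDK and SOCK are set as in the earlier sketch:

# Parse "module executed" from accel_get_stats (same jq filter the harness uses) and
# verify crc32c was exercised by the expected module.
read -r acc_module acc_executed < <("$SPDK"/scripts/rpc.py -s "$SOCK" accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
exp_module=software                       # scan_dsa=false, so no DSA offload is expected
(( acc_executed > 0 )) || { echo "no crc32c executed" >&2; exit 1; }
[[ $acc_module == "$exp_module" ]] || { echo "crc32c ran in $acc_module, expected $exp_module" >&2; exit 1; }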
00:36:36.549 5090.00 IOPS, 636.25 MiB/s [2024-11-19T20:26:10.344Z] 5305.50 IOPS, 663.19 MiB/s 00:36:36.549 Latency(us) 00:36:36.549 [2024-11-19T20:26:10.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:36.549 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:36.549 nvme0n1 : 2.00 5303.79 662.97 0.00 0.00 3007.30 2148.12 9563.40 00:36:36.549 [2024-11-19T20:26:10.344Z] =================================================================================================================== 00:36:36.549 [2024-11-19T20:26:10.344Z] Total : 5303.79 662.97 0.00 0.00 3007.30 2148.12 9563.40 00:36:36.549 { 00:36:36.549 "results": [ 00:36:36.549 { 00:36:36.549 "job": "nvme0n1", 00:36:36.549 "core_mask": "0x2", 00:36:36.549 "workload": "randwrite", 00:36:36.549 "status": "finished", 00:36:36.549 "queue_depth": 16, 00:36:36.549 "io_size": 131072, 00:36:36.549 "runtime": 2.004605, 00:36:36.549 "iops": 5303.78802806538, 00:36:36.549 "mibps": 662.9735035081725, 00:36:36.549 "io_failed": 0, 00:36:36.549 "io_timeout": 0, 00:36:36.549 "avg_latency_us": 3007.3004821224536, 00:36:36.549 "min_latency_us": 2148.1244444444446, 00:36:36.549 "max_latency_us": 9563.401481481482 00:36:36.549 } 00:36:36.549 ], 00:36:36.549 "core_count": 1 00:36:36.549 } 00:36:36.549 21:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:36.549 21:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:36.549 21:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:36.549 21:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:36.549 21:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:36.549 | select(.opcode=="crc32c") 00:36:36.549 | "\(.module_name) \(.executed)"' 00:36:36.549 21:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:36.549 21:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:36.549 21:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:36.549 21:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:36.549 21:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3160451 00:36:36.549 21:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3160451 ']' 00:36:36.549 21:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3160451 00:36:36.549 21:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:36.549 21:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:36.549 21:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3160451 00:36:36.549 21:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:36.549 21:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:36:36.549 21:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3160451' 00:36:36.549 killing process with pid 3160451 00:36:36.549 21:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3160451 00:36:36.549 Received shutdown signal, test time was about 2.000000 seconds 00:36:36.549 00:36:36.549 Latency(us) 00:36:36.549 [2024-11-19T20:26:10.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:36.549 [2024-11-19T20:26:10.344Z] =================================================================================================================== 00:36:36.549 [2024-11-19T20:26:10.344Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:36.549 21:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3160451 00:36:37.484 21:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3158415 00:36:37.484 21:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3158415 ']' 00:36:37.484 21:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3158415 00:36:37.484 21:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:37.484 21:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:37.484 21:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3158415 00:36:37.484 21:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:37.484 21:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:37.484 21:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3158415' 00:36:37.484 killing process with pid 3158415 00:36:37.484 21:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3158415 00:36:37.484 21:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3158415 00:36:38.860 00:36:38.860 real 0m24.559s 00:36:38.860 user 0m48.201s 00:36:38.860 sys 0m4.712s 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:38.860 ************************************ 00:36:38.860 END TEST nvmf_digest_clean 00:36:38.860 ************************************ 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:38.860 ************************************ 00:36:38.860 START TEST nvmf_digest_error 00:36:38.860 ************************************ 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3161277 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3161277 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3161277 ']' 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:38.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:38.860 21:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:38.860 [2024-11-19 21:26:12.503412] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:36:38.860 [2024-11-19 21:26:12.503566] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:39.119 [2024-11-19 21:26:12.678957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.119 [2024-11-19 21:26:12.819496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:39.119 [2024-11-19 21:26:12.819597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:39.119 [2024-11-19 21:26:12.819618] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:39.119 [2024-11-19 21:26:12.819638] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:39.119 [2024-11-19 21:26:12.819654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:39.119 [2024-11-19 21:26:12.821039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:40.053 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:40.053 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:40.053 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:40.053 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:40.053 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:40.053 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:40.054 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:40.054 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.054 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:40.054 [2024-11-19 21:26:13.567768] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:40.054 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.054 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:40.054 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:40.054 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.054 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:40.312 null0 00:36:40.312 [2024-11-19 21:26:13.952500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:40.312 [2024-11-19 21:26:13.976810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:40.312 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.312 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:40.312 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:40.312 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:40.312 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:40.312 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:40.312 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3161551 00:36:40.312 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:40.312 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3161551 /var/tmp/bperf.sock 00:36:40.312 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3161551 ']' 
00:36:40.312 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:40.312 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:40.312 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:40.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:40.312 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:40.312 21:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:40.312 [2024-11-19 21:26:14.061273] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:36:40.312 [2024-11-19 21:26:14.061420] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161551 ] 00:36:40.571 [2024-11-19 21:26:14.193096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:40.571 [2024-11-19 21:26:14.311802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:41.505 21:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:41.505 21:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:41.505 21:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:41.505 21:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:41.505 21:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:41.505 21:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.505 21:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:41.505 21:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.505 21:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:41.505 21:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:42.070 nvme0n1 00:36:42.070 21:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:42.070 21:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.070 21:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
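In the digest_error half the target itself is set up to corrupt digests: crc32c is assigned to the "error" accel module at startup, and once the bdevperf controller is attached, corruption is injected into a batch of crc32c operations, which is what produces the stream of "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" completions that follows. A sketch of the two target-side RPCs involved, with SPDK pointing at the checkout as in the earlier sketch; the log issues them through rpc_cmd against the nvmf target (which runs inside a network namespace), so the plain rpc.py invocations below assume the target's default RPC socket is reachable:

# On the nvmf target (not the bperf socket): route crc32c through the error module, then
# corrupt 256 crc32c operations so the initiator sees data digest failures on its reads.
$SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256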
00:36:42.070 21:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.070 21:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:42.070 21:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:42.070 Running I/O for 2 seconds... 00:36:42.329 [2024-11-19 21:26:15.885250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.329 [2024-11-19 21:26:15.885326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.329 [2024-11-19 21:26:15.885355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.329 [2024-11-19 21:26:15.908616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.329 [2024-11-19 21:26:15.908667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.329 [2024-11-19 21:26:15.908696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.329 [2024-11-19 21:26:15.926819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.329 [2024-11-19 21:26:15.926867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.329 [2024-11-19 21:26:15.926897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.329 [2024-11-19 21:26:15.945864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.329 [2024-11-19 21:26:15.945912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.329 [2024-11-19 21:26:15.945948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.329 [2024-11-19 21:26:15.962182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.329 [2024-11-19 21:26:15.962237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.329 [2024-11-19 21:26:15.962263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.329 [2024-11-19 21:26:15.984155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.329 [2024-11-19 21:26:15.984209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.329 [2024-11-19 21:26:15.984235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.329 [2024-11-19 21:26:16.004637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.329 [2024-11-19 21:26:16.004687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.329 [2024-11-19 21:26:16.004716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.329 [2024-11-19 21:26:16.023086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.329 [2024-11-19 21:26:16.023161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.329 [2024-11-19 21:26:16.023190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.329 [2024-11-19 21:26:16.045295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.329 [2024-11-19 21:26:16.045337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.329 [2024-11-19 21:26:16.045377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.329 [2024-11-19 21:26:16.064608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.329 [2024-11-19 21:26:16.064657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.329 [2024-11-19 21:26:16.064703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.329 [2024-11-19 21:26:16.085445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.329 [2024-11-19 21:26:16.085499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.329 [2024-11-19 21:26:16.085539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.329 [2024-11-19 21:26:16.102353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.329 [2024-11-19 21:26:16.102423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.329 [2024-11-19 21:26:16.102460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.329 [2024-11-19 21:26:16.120152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.329 [2024-11-19 21:26:16.120200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.329 [2024-11-19 21:26:16.120227] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.588 [2024-11-19 21:26:16.139731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.588 [2024-11-19 21:26:16.139788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.588 [2024-11-19 21:26:16.139815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.588 [2024-11-19 21:26:16.160711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.588 [2024-11-19 21:26:16.160764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.588 [2024-11-19 21:26:16.160793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.588 [2024-11-19 21:26:16.181189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.588 [2024-11-19 21:26:16.181249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.588 [2024-11-19 21:26:16.181306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.588 [2024-11-19 21:26:16.197942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.588 [2024-11-19 21:26:16.197990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.588 [2024-11-19 21:26:16.198020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.588 [2024-11-19 21:26:16.218174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.588 [2024-11-19 21:26:16.218229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.588 [2024-11-19 21:26:16.218255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.588 [2024-11-19 21:26:16.239587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.588 [2024-11-19 21:26:16.239636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.588 [2024-11-19 21:26:16.239665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.588 [2024-11-19 21:26:16.262656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.588 [2024-11-19 21:26:16.262704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7311 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.588 [2024-11-19 21:26:16.262734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.588 [2024-11-19 21:26:16.280856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.588 [2024-11-19 21:26:16.280904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.588 [2024-11-19 21:26:16.280934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.588 [2024-11-19 21:26:16.298704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.588 [2024-11-19 21:26:16.298753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.589 [2024-11-19 21:26:16.298782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.589 [2024-11-19 21:26:16.316557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.589 [2024-11-19 21:26:16.316606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.589 [2024-11-19 21:26:16.316635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.589 [2024-11-19 21:26:16.334372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.589 [2024-11-19 21:26:16.334427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.589 [2024-11-19 21:26:16.334457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.589 [2024-11-19 21:26:16.355747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.589 [2024-11-19 21:26:16.355803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.589 [2024-11-19 21:26:16.355833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.589 [2024-11-19 21:26:16.376324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.589 [2024-11-19 21:26:16.376375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.589 [2024-11-19 21:26:16.376405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.847 [2024-11-19 21:26:16.398781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.847 [2024-11-19 21:26:16.398830] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.847 [2024-11-19 21:26:16.398859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.847 [2024-11-19 21:26:16.417842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.847 [2024-11-19 21:26:16.417890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.847 [2024-11-19 21:26:16.417919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.847 [2024-11-19 21:26:16.433686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.847 [2024-11-19 21:26:16.433734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.847 [2024-11-19 21:26:16.433765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.847 [2024-11-19 21:26:16.456167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.847 [2024-11-19 21:26:16.456226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.847 [2024-11-19 21:26:16.456256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.847 [2024-11-19 21:26:16.477741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.847 [2024-11-19 21:26:16.477795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.847 [2024-11-19 21:26:16.477832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.847 [2024-11-19 21:26:16.492613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.847 [2024-11-19 21:26:16.492661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.847 [2024-11-19 21:26:16.492690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.847 [2024-11-19 21:26:16.515268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.847 [2024-11-19 21:26:16.515323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.847 [2024-11-19 21:26:16.515349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.847 [2024-11-19 21:26:16.537008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6150001f2a00) 00:36:42.847 [2024-11-19 21:26:16.537056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.847 [2024-11-19 21:26:16.537095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.847 [2024-11-19 21:26:16.553888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.847 [2024-11-19 21:26:16.553936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.847 [2024-11-19 21:26:16.553965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.847 [2024-11-19 21:26:16.574446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.847 [2024-11-19 21:26:16.574494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.847 [2024-11-19 21:26:16.574524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.847 [2024-11-19 21:26:16.595893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.847 [2024-11-19 21:26:16.595941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.847 [2024-11-19 21:26:16.595970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.847 [2024-11-19 21:26:16.616820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.848 [2024-11-19 21:26:16.616876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.848 [2024-11-19 21:26:16.616906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.848 [2024-11-19 21:26:16.632366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.848 [2024-11-19 21:26:16.632423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.848 [2024-11-19 21:26:16.632453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.106 [2024-11-19 21:26:16.650324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.106 [2024-11-19 21:26:16.650380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.106 [2024-11-19 21:26:16.650416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.106 [2024-11-19 
21:26:16.669808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.106 [2024-11-19 21:26:16.669856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.106 [2024-11-19 21:26:16.669885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.106 [2024-11-19 21:26:16.686184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.106 [2024-11-19 21:26:16.686246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.106 [2024-11-19 21:26:16.686273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.106 [2024-11-19 21:26:16.706763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.106 [2024-11-19 21:26:16.706810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.106 [2024-11-19 21:26:16.706839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.106 [2024-11-19 21:26:16.729087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.106 [2024-11-19 21:26:16.729144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.106 [2024-11-19 21:26:16.729184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.106 [2024-11-19 21:26:16.753265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.106 [2024-11-19 21:26:16.753324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.106 [2024-11-19 21:26:16.753364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.106 [2024-11-19 21:26:16.770996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.106 [2024-11-19 21:26:16.771044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.106 [2024-11-19 21:26:16.771085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.106 [2024-11-19 21:26:16.786535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.106 [2024-11-19 21:26:16.786590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.106 [2024-11-19 21:26:16.786626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.106 [2024-11-19 21:26:16.806813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.106 [2024-11-19 21:26:16.806866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.106 [2024-11-19 21:26:16.806897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.106 [2024-11-19 21:26:16.829911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.106 [2024-11-19 21:26:16.829960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.106 [2024-11-19 21:26:16.829990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.106 [2024-11-19 21:26:16.846573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.106 [2024-11-19 21:26:16.846621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.106 [2024-11-19 21:26:16.846650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.106 12960.00 IOPS, 50.62 MiB/s [2024-11-19T20:26:16.901Z] [2024-11-19 21:26:16.865222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.107 [2024-11-19 21:26:16.865269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.107 [2024-11-19 21:26:16.865298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.107 [2024-11-19 21:26:16.887206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.107 [2024-11-19 21:26:16.887254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.107 [2024-11-19 21:26:16.887282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.365 [2024-11-19 21:26:16.903472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.366 [2024-11-19 21:26:16.903542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.366 [2024-11-19 21:26:16.903571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.366 [2024-11-19 21:26:16.924376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.366 [2024-11-19 21:26:16.924429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10430 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:43.366 [2024-11-19 21:26:16.924460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.366 [2024-11-19 21:26:16.943528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.366 [2024-11-19 21:26:16.943576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.366 [2024-11-19 21:26:16.943606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.366 [2024-11-19 21:26:16.962482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.366 [2024-11-19 21:26:16.962530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.366 [2024-11-19 21:26:16.962561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.366 [2024-11-19 21:26:16.980782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.366 [2024-11-19 21:26:16.980830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.366 [2024-11-19 21:26:16.980859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.366 [2024-11-19 21:26:16.998271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.366 [2024-11-19 21:26:16.998320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.366 [2024-11-19 21:26:16.998350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.366 [2024-11-19 21:26:17.015779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.366 [2024-11-19 21:26:17.015838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.366 [2024-11-19 21:26:17.015895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.366 [2024-11-19 21:26:17.037142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.366 [2024-11-19 21:26:17.037190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.366 [2024-11-19 21:26:17.037219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.366 [2024-11-19 21:26:17.055389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.366 [2024-11-19 21:26:17.055437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.366 [2024-11-19 21:26:17.055467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.366 [2024-11-19 21:26:17.072589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.366 [2024-11-19 21:26:17.072643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.366 [2024-11-19 21:26:17.072676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.366 [2024-11-19 21:26:17.090430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.366 [2024-11-19 21:26:17.090483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.366 [2024-11-19 21:26:17.090514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.366 [2024-11-19 21:26:17.108041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.366 [2024-11-19 21:26:17.108099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.366 [2024-11-19 21:26:17.108135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.366 [2024-11-19 21:26:17.125974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.366 [2024-11-19 21:26:17.126025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.366 [2024-11-19 21:26:17.126054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.366 [2024-11-19 21:26:17.143774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.366 [2024-11-19 21:26:17.143823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.366 [2024-11-19 21:26:17.143852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.625 [2024-11-19 21:26:17.161505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.625 [2024-11-19 21:26:17.161559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.625 [2024-11-19 21:26:17.161590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.625 [2024-11-19 21:26:17.179335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150001f2a00) 00:36:43.625 [2024-11-19 21:26:17.179389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.625 [2024-11-19 21:26:17.179420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.625 [2024-11-19 21:26:17.197178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.625 [2024-11-19 21:26:17.197226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.625 [2024-11-19 21:26:17.197255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.625 [2024-11-19 21:26:17.216386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.625 [2024-11-19 21:26:17.216441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.625 [2024-11-19 21:26:17.216471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.625 [2024-11-19 21:26:17.234459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.625 [2024-11-19 21:26:17.234507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.625 [2024-11-19 21:26:17.234537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.625 [2024-11-19 21:26:17.252307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.625 [2024-11-19 21:26:17.252356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.625 [2024-11-19 21:26:17.252385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.625 [2024-11-19 21:26:17.270081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.625 [2024-11-19 21:26:17.270129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.625 [2024-11-19 21:26:17.270159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.625 [2024-11-19 21:26:17.288043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.625 [2024-11-19 21:26:17.288101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.625 [2024-11-19 21:26:17.288133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.625 [2024-11-19 21:26:17.306049] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.625 [2024-11-19 21:26:17.306122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.625 [2024-11-19 21:26:17.306153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.625 [2024-11-19 21:26:17.323932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.625 [2024-11-19 21:26:17.323989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.625 [2024-11-19 21:26:17.324019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.625 [2024-11-19 21:26:17.341561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.625 [2024-11-19 21:26:17.341614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.625 [2024-11-19 21:26:17.341652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.625 [2024-11-19 21:26:17.359380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.625 [2024-11-19 21:26:17.359431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.625 [2024-11-19 21:26:17.359466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.625 [2024-11-19 21:26:17.377351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.625 [2024-11-19 21:26:17.377398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.625 [2024-11-19 21:26:17.377427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.625 [2024-11-19 21:26:17.395199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.625 [2024-11-19 21:26:17.395252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.625 [2024-11-19 21:26:17.395281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.625 [2024-11-19 21:26:17.413853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.625 [2024-11-19 21:26:17.413922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.625 [2024-11-19 21:26:17.413951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.884 [2024-11-19 21:26:17.433579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.884 [2024-11-19 21:26:17.433627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.884 [2024-11-19 21:26:17.433656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.884 [2024-11-19 21:26:17.450083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.884 [2024-11-19 21:26:17.450130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.884 [2024-11-19 21:26:17.450160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.884 [2024-11-19 21:26:17.471513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.884 [2024-11-19 21:26:17.471562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.884 [2024-11-19 21:26:17.471592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.884 [2024-11-19 21:26:17.492180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.884 [2024-11-19 21:26:17.492229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.884 [2024-11-19 21:26:17.492259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.884 [2024-11-19 21:26:17.513744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.884 [2024-11-19 21:26:17.513792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.884 [2024-11-19 21:26:17.513821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.884 [2024-11-19 21:26:17.530038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.884 [2024-11-19 21:26:17.530095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.884 [2024-11-19 21:26:17.530126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.884 [2024-11-19 21:26:17.549237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.884 [2024-11-19 21:26:17.549292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.884 [2024-11-19 21:26:17.549332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.884 [2024-11-19 21:26:17.565832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.884 [2024-11-19 21:26:17.565879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.884 [2024-11-19 21:26:17.565909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.884 [2024-11-19 21:26:17.584149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.884 [2024-11-19 21:26:17.584197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.884 [2024-11-19 21:26:17.584232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.884 [2024-11-19 21:26:17.601813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.884 [2024-11-19 21:26:17.601862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.884 [2024-11-19 21:26:17.601891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.884 [2024-11-19 21:26:17.620952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.884 [2024-11-19 21:26:17.621003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.884 [2024-11-19 21:26:17.621036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.884 [2024-11-19 21:26:17.645457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.884 [2024-11-19 21:26:17.645516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.884 [2024-11-19 21:26:17.645546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:43.884 [2024-11-19 21:26:17.663303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:43.884 [2024-11-19 21:26:17.663351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.884 [2024-11-19 21:26:17.663379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.143 [2024-11-19 21:26:17.679615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.143 [2024-11-19 21:26:17.679663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5517 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.143 [2024-11-19 21:26:17.679692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.143 [2024-11-19 21:26:17.697264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.143 [2024-11-19 21:26:17.697317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.143 [2024-11-19 21:26:17.697352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.143 [2024-11-19 21:26:17.717254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.143 [2024-11-19 21:26:17.717302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.143 [2024-11-19 21:26:17.717331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.143 [2024-11-19 21:26:17.735114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.143 [2024-11-19 21:26:17.735162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.143 [2024-11-19 21:26:17.735190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.143 [2024-11-19 21:26:17.753027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.143 [2024-11-19 21:26:17.753082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.143 [2024-11-19 21:26:17.753114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.143 [2024-11-19 21:26:17.770757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.143 [2024-11-19 21:26:17.770805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.143 [2024-11-19 21:26:17.770835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.143 [2024-11-19 21:26:17.788603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.143 [2024-11-19 21:26:17.788651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.143 [2024-11-19 21:26:17.788680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.143 [2024-11-19 21:26:17.806393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.143 [2024-11-19 21:26:17.806441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.143 [2024-11-19 21:26:17.806471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.143 [2024-11-19 21:26:17.824081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.143 [2024-11-19 21:26:17.824128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.143 [2024-11-19 21:26:17.824157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.143 [2024-11-19 21:26:17.840580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.143 [2024-11-19 21:26:17.840628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.143 [2024-11-19 21:26:17.840656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.143 [2024-11-19 21:26:17.861752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:44.143 [2024-11-19 21:26:17.861803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.143 [2024-11-19 21:26:17.861832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.143 13404.50 IOPS, 52.36 MiB/s 00:36:44.143 Latency(us) 00:36:44.143 [2024-11-19T20:26:17.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.143 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:44.143 nvme0n1 : 2.01 13425.50 52.44 0.00 0.00 9520.72 5072.97 26991.12 00:36:44.143 [2024-11-19T20:26:17.938Z] =================================================================================================================== 00:36:44.143 [2024-11-19T20:26:17.938Z] Total : 13425.50 52.44 0.00 0.00 9520.72 5072.97 26991.12 00:36:44.143 { 00:36:44.143 "results": [ 00:36:44.143 { 00:36:44.143 "job": "nvme0n1", 00:36:44.143 "core_mask": "0x2", 00:36:44.144 "workload": "randread", 00:36:44.144 "status": "finished", 00:36:44.144 "queue_depth": 128, 00:36:44.144 "io_size": 4096, 00:36:44.144 "runtime": 2.006406, 00:36:44.144 "iops": 13425.49812949124, 00:36:44.144 "mibps": 52.443352068325154, 00:36:44.144 "io_failed": 0, 00:36:44.144 "io_timeout": 0, 00:36:44.144 "avg_latency_us": 9520.722826127907, 00:36:44.144 "min_latency_us": 5072.971851851852, 00:36:44.144 "max_latency_us": 26991.122962962963 00:36:44.144 } 00:36:44.144 ], 00:36:44.144 "core_count": 1 00:36:44.144 } 00:36:44.144 21:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:44.144 21:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:44.144 21:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:44.144 | .driver_specific 00:36:44.144 | .nvme_error 
00:36:44.144 | .status_code 00:36:44.144 | .command_transient_transport_error' 00:36:44.144 21:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:44.402 21:26:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 105 > 0 )) 00:36:44.402 21:26:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3161551 00:36:44.402 21:26:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3161551 ']' 00:36:44.402 21:26:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3161551 00:36:44.402 21:26:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:44.402 21:26:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:44.402 21:26:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3161551 00:36:44.661 21:26:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:44.661 21:26:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:44.661 21:26:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3161551' 00:36:44.661 killing process with pid 3161551 00:36:44.661 21:26:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3161551 00:36:44.661 Received shutdown signal, test time was about 2.000000 seconds 00:36:44.661 00:36:44.661 Latency(us) 00:36:44.661 [2024-11-19T20:26:18.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.661 [2024-11-19T20:26:18.456Z] =================================================================================================================== 00:36:44.661 [2024-11-19T20:26:18.456Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:44.661 21:26:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3161551 00:36:45.595 21:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:36:45.595 21:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:45.595 21:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:45.595 21:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:45.595 21:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:45.595 21:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3162093 00:36:45.595 21:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:36:45.595 21:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3162093 /var/tmp/bperf.sock 00:36:45.595 21:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3162093 ']' 00:36:45.595 21:26:19 
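The jq pipeline traced just above is how host/digest.sh grades this run: it pulls bdevperf's per-bdev statistics over the bperf RPC socket and reads the transient-transport-error counter that the injected digest corruption is expected to bump. A minimal stand-alone sketch of that check, reusing only the rpc.py path, socket, bdev name, and jq filter visible in the trace (treat those values as specific to this job):

#!/usr/bin/env bash
# Sketch of the pass/fail check from the trace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
# Ask bdevperf for iostat on nvme0n1 and extract the per-status-code NVMe counter
# (collected because bdev_nvme_set_options was called with --nvme-error-stat).
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The test passes only if the corrupted digests surfaced as transient transport
# errors; in the run above the counter reads 105.
(( errcount > 0 ))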
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:45.595 21:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:45.595 21:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:45.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:45.595 21:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:45.595 21:26:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:45.595 [2024-11-19 21:26:19.206545] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:36:45.595 [2024-11-19 21:26:19.206679] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162093 ] 00:36:45.595 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:45.595 Zero copy mechanism will not be used. 00:36:45.595 [2024-11-19 21:26:19.352964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:45.852 [2024-11-19 21:26:19.492886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:46.786 21:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:46.786 21:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:46.786 21:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:46.786 21:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:46.786 21:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:46.786 21:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.786 21:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:46.786 21:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.786 21:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:46.786 21:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:47.352 nvme0n1 00:36:47.352 21:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:47.352 21:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.352 21:26:20 
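Condensed, the xtrace around this point is the entire setup for the 128 KiB / queue-depth-16 error-injection pass: start bdevperf on its own RPC socket, enable per-status-code NVMe error counters with unlimited bdev retries, attach the NVMe-oF/TCP controller with data digest enabled, arm crc32c corruption in the accel layer, and then kick off the timed run (the perform_tests call appears just below). The sketch that follows copies every command and argument from the trace; the one assumption is the destination of the rpc_cmd helper, shown here as rpc.py's default application socket.

#!/usr/bin/env bash
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_sock=/var/tmp/bperf.sock

# 1. bdevperf in wait mode (-z): 128 KiB random reads, queue depth 16, 2-second runs, core mask 0x2.
"$spdk/build/examples/bdevperf" -m 2 -r "$bperf_sock" -w randread -o 131072 -t 2 -q 16 -z &
while [[ ! -S "$bperf_sock" ]]; do sleep 0.1; done   # crude stand-in for the harness's waitforlisten

# 2. Keep per-status-code NVMe error counters (what the check above reads) and retry failed I/O
#    at the bdev_nvme layer instead of failing the bdev.
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 3. Attach the target over TCP with the NVMe/TCP data digest (crc32c) enabled via --ddgst.
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 4. Arm crc32c corruption in the accel framework (arguments as in the trace); the trace issues
#    this through rpc_cmd, whose socket is not visible here, so the default socket is assumed.
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# 5. Run the workload; the resulting digest mismatches are the transient transport errors logged below.
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests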
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:47.352 21:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.353 21:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:47.353 21:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:47.353 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:47.353 Zero copy mechanism will not be used. 00:36:47.353 Running I/O for 2 seconds... 00:36:47.353 [2024-11-19 21:26:21.052732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.353 [2024-11-19 21:26:21.052803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.353 [2024-11-19 21:26:21.052837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.353 [2024-11-19 21:26:21.060541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.353 [2024-11-19 21:26:21.060589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.353 [2024-11-19 21:26:21.060616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.353 [2024-11-19 21:26:21.068095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.353 [2024-11-19 21:26:21.068155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.353 [2024-11-19 21:26:21.068185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.353 [2024-11-19 21:26:21.075301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.353 [2024-11-19 21:26:21.075353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.353 [2024-11-19 21:26:21.075380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.353 [2024-11-19 21:26:21.083378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.353 [2024-11-19 21:26:21.083435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.353 [2024-11-19 21:26:21.083476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.353 [2024-11-19 21:26:21.092745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.353 [2024-11-19 21:26:21.092793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.353 [2024-11-19 21:26:21.092822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.353 [2024-11-19 21:26:21.102244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.353 [2024-11-19 21:26:21.102292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.353 [2024-11-19 21:26:21.102321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.353 [2024-11-19 21:26:21.107688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.353 [2024-11-19 21:26:21.107736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.353 [2024-11-19 21:26:21.107766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.353 [2024-11-19 21:26:21.115594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.353 [2024-11-19 21:26:21.115642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.353 [2024-11-19 21:26:21.115672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.353 [2024-11-19 21:26:21.123452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.353 [2024-11-19 21:26:21.123501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.353 [2024-11-19 21:26:21.123549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.353 [2024-11-19 21:26:21.131006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.353 [2024-11-19 21:26:21.131054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.353 [2024-11-19 21:26:21.131096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.353 [2024-11-19 21:26:21.139026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.353 [2024-11-19 21:26:21.139085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.353 [2024-11-19 21:26:21.139118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.613 [2024-11-19 21:26:21.146496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00)
00:36:47.613 [2024-11-19 21:26:21.146551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.613 [2024-11-19 21:26:21.146586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.613 [2024-11-19 21:26:21.153005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.613 [2024-11-19 21:26:21.153052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.613 [2024-11-19 21:26:21.153092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[00:36:47.613-00:36:48.396 / 2024-11-19 21:26:21.160448 through 21:26:22.047652: the same three-line pattern repeats on tqpair=(0x6150001f2a00) - nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done *ERROR*: data digest error, nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE*: READ sqid:1 (cid 0-14, nsid:1, varying lba, len:32, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cdw0:0, sqhd cycling 0002/0022/0042/0062, p:0 m:0 dnr:0]
00:36:48.396 4505.00 IOPS, 563.12 MiB/s [2024-11-19T20:26:22.191Z]
[00:36:48.396 / 2024-11-19 21:26:22.056119 through 21:26:22.111661: the same data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pattern continues]
00:36:48.396 [2024-11-19 21:26:22.119378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:48.396 [2024-11-19 21:26:22.119426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21184 len:32 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.396 [2024-11-19 21:26:22.119455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.396 [2024-11-19 21:26:22.126862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.396 [2024-11-19 21:26:22.126910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.396 [2024-11-19 21:26:22.126952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.396 [2024-11-19 21:26:22.135110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.396 [2024-11-19 21:26:22.135158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.396 [2024-11-19 21:26:22.135188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.396 [2024-11-19 21:26:22.142892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.396 [2024-11-19 21:26:22.142940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.396 [2024-11-19 21:26:22.142969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.396 [2024-11-19 21:26:22.147874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.396 [2024-11-19 21:26:22.147921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.397 [2024-11-19 21:26:22.147951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.397 [2024-11-19 21:26:22.157017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.397 [2024-11-19 21:26:22.157067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.397 [2024-11-19 21:26:22.157108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.397 [2024-11-19 21:26:22.166131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.397 [2024-11-19 21:26:22.166179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.397 [2024-11-19 21:26:22.166209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.397 [2024-11-19 21:26:22.175234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.397 [2024-11-19 21:26:22.175282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.397 [2024-11-19 21:26:22.175312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.397 [2024-11-19 21:26:22.184027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.397 [2024-11-19 21:26:22.184086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.397 [2024-11-19 21:26:22.184118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.192795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.192845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.192875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.201691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.201752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.201782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.210903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.210951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.210981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.220266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.220315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.220345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.229146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.229194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.229224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.237912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.237960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.237990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.246701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.246749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.246778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.255518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.255566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.255596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.264331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.264379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.264409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.272984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.273032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.273062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.281756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.281804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.281833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.290461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.290508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.290537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.299187] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.299233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.299263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.306677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.306725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.306755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.312153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.312200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.312230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.316044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.316100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.316131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.321392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.321454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.321484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.327623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.327669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.327699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.334799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.334857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.334887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.343479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.343528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.343558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.352310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.656 [2024-11-19 21:26:22.352358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.656 [2024-11-19 21:26:22.352387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.656 [2024-11-19 21:26:22.360996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.657 [2024-11-19 21:26:22.361043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.657 [2024-11-19 21:26:22.361081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.657 [2024-11-19 21:26:22.369862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.657 [2024-11-19 21:26:22.369909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.657 [2024-11-19 21:26:22.369939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.657 [2024-11-19 21:26:22.378599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.657 [2024-11-19 21:26:22.378647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.657 [2024-11-19 21:26:22.378677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.657 [2024-11-19 21:26:22.387433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.657 [2024-11-19 21:26:22.387480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.657 [2024-11-19 21:26:22.387510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.657 [2024-11-19 21:26:22.396315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.657 [2024-11-19 21:26:22.396363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.657 [2024-11-19 21:26:22.396393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.657 [2024-11-19 21:26:22.405153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.657 [2024-11-19 21:26:22.405201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.657 [2024-11-19 21:26:22.405232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.657 [2024-11-19 21:26:22.413955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.657 [2024-11-19 21:26:22.414001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.657 [2024-11-19 21:26:22.414030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.657 [2024-11-19 21:26:22.422772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.657 [2024-11-19 21:26:22.422820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.657 [2024-11-19 21:26:22.422849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.657 [2024-11-19 21:26:22.431557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.657 [2024-11-19 21:26:22.431605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.657 [2024-11-19 21:26:22.431635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.657 [2024-11-19 21:26:22.440346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.657 [2024-11-19 21:26:22.440394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.657 [2024-11-19 21:26:22.440424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.916 [2024-11-19 21:26:22.449171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.916 [2024-11-19 21:26:22.449221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.916 [2024-11-19 21:26:22.449250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.916 [2024-11-19 21:26:22.457879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.916 [2024-11-19 21:26:22.457928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13312 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:48.916 [2024-11-19 21:26:22.457958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.916 [2024-11-19 21:26:22.466761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.916 [2024-11-19 21:26:22.466810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.916 [2024-11-19 21:26:22.466840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.916 [2024-11-19 21:26:22.475314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.916 [2024-11-19 21:26:22.475362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.916 [2024-11-19 21:26:22.475392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.916 [2024-11-19 21:26:22.484217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.916 [2024-11-19 21:26:22.484277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.916 [2024-11-19 21:26:22.484308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.916 [2024-11-19 21:26:22.492325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.916 [2024-11-19 21:26:22.492373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.916 [2024-11-19 21:26:22.492402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.916 [2024-11-19 21:26:22.497572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.916 [2024-11-19 21:26:22.497619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.916 [2024-11-19 21:26:22.497649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.916 [2024-11-19 21:26:22.504049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.916 [2024-11-19 21:26:22.504104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.916 [2024-11-19 21:26:22.504135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.916 [2024-11-19 21:26:22.509534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.509580] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.509609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.513730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.513776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.513806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.518548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.518593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.518623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.524311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.524358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.524388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.529120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.529167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.529197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.536331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.536379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.536409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.545168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.545224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.545262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.552890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.552944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.552973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.559999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.560046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.560087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.566213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.566262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.566291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.575512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.575562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.575593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.585460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.585509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.585539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.595284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.595343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.595373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.605246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.605298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.605348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.614896] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.614949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.614978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.624414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.624465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.624494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.634335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.634388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.634419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.644206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.644258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.644289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.653492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.653543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.653573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.663258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.663309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.663340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.673007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.673058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.673100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.682407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.682457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.682488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.690179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.690228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.690258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.697201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.697249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.697279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.917 [2024-11-19 21:26:22.703813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.917 [2024-11-19 21:26:22.703861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.917 [2024-11-19 21:26:22.703891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:49.176 [2024-11-19 21:26:22.711519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.176 [2024-11-19 21:26:22.711572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.176 [2024-11-19 21:26:22.711602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:49.176 [2024-11-19 21:26:22.718986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.176 [2024-11-19 21:26:22.719037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.176 [2024-11-19 21:26:22.719067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:49.176 [2024-11-19 21:26:22.727616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.176 [2024-11-19 21:26:22.727666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.176 [2024-11-19 21:26:22.727696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:49.176 [2024-11-19 21:26:22.736524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.736574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.736604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.745855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.745905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.745935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.755335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.755385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.755431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.764163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.764212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.764242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.772978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.773027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.773057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.781741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.781792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.781822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.790525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.790575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17120 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.790605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.799253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.799301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.799331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.808222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.808278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.808309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.816021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.816079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.816112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.820954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.821002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.821032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.829654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.829704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.829734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.839132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.839184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.839213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.848468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.848518] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.848549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.856868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.856919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.856949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.864159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.864210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.864241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.869041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.869096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.869127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.875055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.875112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.875142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.881877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.881934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.881967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.889423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.889473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.889517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.897026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.897088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.897122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.904463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.904514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.904544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.911807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.911857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.911887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.919692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.919742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.919772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.927686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.927737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.927768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.935860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.935913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.177 [2024-11-19 21:26:22.935944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:49.177 [2024-11-19 21:26:22.943717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.177 [2024-11-19 21:26:22.943768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.178 [2024-11-19 21:26:22.943798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:49.178 [2024-11-19 21:26:22.951297] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.178 [2024-11-19 21:26:22.951347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.178 [2024-11-19 21:26:22.951377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:49.178 [2024-11-19 21:26:22.959108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.178 [2024-11-19 21:26:22.959159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.178 [2024-11-19 21:26:22.959189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:49.178 [2024-11-19 21:26:22.967380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.178 [2024-11-19 21:26:22.967432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.178 [2024-11-19 21:26:22.967462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:49.436 [2024-11-19 21:26:22.975756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.436 [2024-11-19 21:26:22.975808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.436 [2024-11-19 21:26:22.975838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:49.436 [2024-11-19 21:26:22.983642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.436 [2024-11-19 21:26:22.983692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.436 [2024-11-19 21:26:22.983723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:49.436 [2024-11-19 21:26:22.991173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.436 [2024-11-19 21:26:22.991225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.436 [2024-11-19 21:26:22.991254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:49.436 [2024-11-19 21:26:22.998928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.436 [2024-11-19 21:26:22.998979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.436 [2024-11-19 21:26:22.999008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:49.436 [2024-11-19 21:26:23.003703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.436 [2024-11-19 21:26:23.003750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.436 [2024-11-19 21:26:23.003800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:49.436 [2024-11-19 21:26:23.012421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.436 [2024-11-19 21:26:23.012471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.436 [2024-11-19 21:26:23.012501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:49.436 [2024-11-19 21:26:23.020902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.436 [2024-11-19 21:26:23.020953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.436 [2024-11-19 21:26:23.020996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:49.436 [2024-11-19 21:26:23.030273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.436 [2024-11-19 21:26:23.030323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.436 [2024-11-19 21:26:23.030354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:49.436 [2024-11-19 21:26:23.039508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.436 [2024-11-19 21:26:23.039558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.436 [2024-11-19 21:26:23.039589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:49.436 [2024-11-19 21:26:23.049176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:49.436 [2024-11-19 21:26:23.049237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.436 [2024-11-19 21:26:23.049268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:49.436 4191.00 IOPS, 523.88 MiB/s 00:36:49.436 Latency(us) 00:36:49.436 [2024-11-19T20:26:23.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:49.436 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:49.436 nvme0n1 : 2.00 4193.12 524.14 0.00 0.00 3808.83 1146.88 10631.40 00:36:49.436 [2024-11-19T20:26:23.232Z] 
===================================================================================================================
00:36:49.437 [2024-11-19T20:26:23.232Z] Total : 4193.12 524.14 0.00 0.00 3808.83 1146.88 10631.40
00:36:49.437 {
00:36:49.437   "results": [
00:36:49.437     {
00:36:49.437       "job": "nvme0n1",
00:36:49.437       "core_mask": "0x2",
00:36:49.437       "workload": "randread",
00:36:49.437       "status": "finished",
00:36:49.437       "queue_depth": 16,
00:36:49.437       "io_size": 131072,
00:36:49.437       "runtime": 2.002803,
00:36:49.437       "iops": 4193.123337642294,
00:36:49.437       "mibps": 524.1404172052868,
00:36:49.437       "io_failed": 0,
00:36:49.437       "io_timeout": 0,
00:36:49.437       "avg_latency_us": 3808.8316449242766,
00:36:49.437       "min_latency_us": 1146.88,
00:36:49.437       "max_latency_us": 10631.395555555555
00:36:49.437     }
00:36:49.437   ],
00:36:49.437   "core_count": 1
00:36:49.437 }
00:36:49.437 21:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:49.437 21:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:49.437 21:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:49.437 | .driver_specific
00:36:49.437 | .nvme_error
00:36:49.437 | .status_code
00:36:49.437 | .command_transient_transport_error'
00:36:49.437 21:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:49.695 21:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 271 > 0 ))
00:36:49.695 21:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3162093
00:36:49.695 21:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3162093 ']'
00:36:49.695 21:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3162093
00:36:49.695 21:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:36:49.695 21:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:49.695 21:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3162093
00:36:49.695 21:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:49.695 21:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:49.695 21:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3162093'
00:36:49.695 killing process with pid 3162093
00:36:49.695 21:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3162093
00:36:49.695 Received shutdown signal, test time was about 2.000000 seconds
00:36:49.695
00:36:49.695 Latency(us)
00:36:49.695 [2024-11-19T20:26:23.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:49.695 [2024-11-19T20:26:23.490Z] ===================================================================================================================
00:36:49.695 [2024-11-19T20:26:23.490Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:49.695 21:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3162093
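The check traced above is how digest.sh scores the randread error case: bdev_nvme keeps per-controller error statistics (switched on earlier with --nvme-error-stat), and the script requires the transient-transport-error counter read back through bdev_get_iostat to be non-zero (271 here). A minimal stand-alone sketch of the same check, with the socket path, RPC name and jq filter taken from the trace; the rpc.py path is shortened and the helper name mirrors the script's own:

  # Count "command transient transport error" completions recorded for a bdev.
  BPERF_SOCK=/var/tmp/bperf.sock
  get_transient_errcount() {
      local bdev=$1
      ./scripts/rpc.py -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
  }
  # The digest-error case only passes when injected errors were actually observed:
  (( $(get_transient_errcount nvme0n1) > 0 ))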
00:36:50.630 21:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:36:50.630 21:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:50.630 21:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:36:50.630 21:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:36:50.630 21:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:36:50.630 21:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3162753
00:36:50.630 21:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:36:50.630 21:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3162753 /var/tmp/bperf.sock
00:36:50.630 21:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3162753 ']'
00:36:50.630 21:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:50.630 21:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:50.630 21:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:50.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:50.630 21:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:50.630 21:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:50.630 [2024-11-19 21:26:24.398589] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
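The relaunch traced above starts bdevperf in its RPC-driven mode: -z keeps the job idle until perform_tests is sent later, -r points it at the private bperf socket, and waitforlisten() blocks until that socket answers. A rough equivalent of this launch/wait step, assuming the workspace layout shown in the trace; the polling loop is only a stand-in for the waitforlisten() helper, not its actual implementation:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock
  # Idle bdevperf instance: 2 s randwrite, 4 KiB I/O, queue depth 128, core mask 0x2.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # Wait until the RPC server behind the socket starts answering.
  for _ in $(seq 1 100); do
      "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" rpc_get_methods &>/dev/null && break
      sleep 0.1
  done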
00:36:50.630 [2024-11-19 21:26:24.398722] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162753 ]
00:36:50.888 [2024-11-19 21:26:24.539991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:50.888 [2024-11-19 21:26:24.680266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:36:51.823 21:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:51.823 21:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:36:51.823 21:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:51.824 21:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:52.083 21:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:52.084 21:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:52.084 21:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:52.084 21:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:52.084 21:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:52.084 21:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:52.399 nvme0n1
00:36:52.399 21:26:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:36:52.399 21:26:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:52.399 21:26:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:52.399 21:26:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:52.399 21:26:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:52.399 21:26:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:52.399 Running I/O for 2 seconds...
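The RPC sequence just traced is the core of the randwrite error case: NVMe error statistics and unlimited bdev retries are switched on, any previous crc32c error injection in the accel layer is cleared, the target is attached over TCP with data digest enabled (--ddgst), corruption of crc32c results is then injected (-i 256 as used by the test), and the idle bdevperf job is finally started. Condensed into plain rpc.py calls, with the same arguments as issued through the bperf socket above (the $SPDK/$RPC shorthands are only abbreviations):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
  # Error stats on, retry forever: injected digest errors surface as
  # transient-transport-error counters instead of failing the workload.
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Start from a clean injection state, then attach with TCP data digest enabled.
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt crc32c results (-i 256) so data digest verification starts failing.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256
  # Kick off the queued bdevperf workload.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests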
00:36:52.685 [2024-11-19 21:26:26.173206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beee38 00:36:52.685 [2024-11-19 21:26:26.174985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.685 [2024-11-19 21:26:26.175052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:52.685 [2024-11-19 21:26:26.188481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf118 00:36:52.685 [2024-11-19 21:26:26.190329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.685 [2024-11-19 21:26:26.190387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:52.685 [2024-11-19 21:26:26.206571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef6a8 00:36:52.685 [2024-11-19 21:26:26.208518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.685 [2024-11-19 21:26:26.208576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:52.685 [2024-11-19 21:26:26.223538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf46d0 00:36:52.685 [2024-11-19 21:26:26.224871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.685 [2024-11-19 21:26:26.224911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:52.685 [2024-11-19 21:26:26.238953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb480 00:36:52.685 [2024-11-19 21:26:26.240086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.685 [2024-11-19 21:26:26.240162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:52.685 [2024-11-19 21:26:26.258445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bedd58 00:36:52.685 [2024-11-19 21:26:26.260945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.685 [2024-11-19 21:26:26.260989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:52.685 [2024-11-19 21:26:26.270257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016becc78 00:36:52.685 [2024-11-19 21:26:26.271511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.685 [2024-11-19 21:26:26.271554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:52.685 [2024-11-19 21:26:26.286493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be73e0 00:36:52.685 [2024-11-19 21:26:26.287740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.685 [2024-11-19 21:26:26.287783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:52.685 [2024-11-19 21:26:26.302834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa7d8 00:36:52.685 [2024-11-19 21:26:26.303680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.685 [2024-11-19 21:26:26.303728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:52.685 [2024-11-19 21:26:26.318795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0ff8 00:36:52.685 [2024-11-19 21:26:26.320061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.685 [2024-11-19 21:26:26.320130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:52.685 [2024-11-19 21:26:26.333958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdece0 00:36:52.685 [2024-11-19 21:26:26.335212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.685 [2024-11-19 21:26:26.335256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:52.685 [2024-11-19 21:26:26.354059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfbcf0 00:36:52.685 [2024-11-19 21:26:26.356145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.685 [2024-11-19 21:26:26.356189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:52.685 [2024-11-19 21:26:26.370145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5658 00:36:52.685 [2024-11-19 21:26:26.372245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.685 [2024-11-19 21:26:26.372289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:52.685 [2024-11-19 21:26:26.383862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:36:52.685 [2024-11-19 21:26:26.385173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.685 [2024-11-19 21:26:26.385216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:52.685 [2024-11-19 21:26:26.400383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4de8 00:36:52.685 [2024-11-19 21:26:26.402088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.685 [2024-11-19 21:26:26.402145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:52.685 [2024-11-19 21:26:26.415211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:36:52.685 [2024-11-19 21:26:26.416301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.685 [2024-11-19 21:26:26.416344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:52.686 [2024-11-19 21:26:26.433031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfac10 00:36:52.686 [2024-11-19 21:26:26.434956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.686 [2024-11-19 21:26:26.435001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:52.686 [2024-11-19 21:26:26.448295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6738 00:36:52.686 [2024-11-19 21:26:26.450055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.686 [2024-11-19 21:26:26.450120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:52.686 [2024-11-19 21:26:26.464271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be12d8 00:36:52.686 [2024-11-19 21:26:26.465605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.686 [2024-11-19 21:26:26.465649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:52.945 [2024-11-19 21:26:26.480982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1868 00:36:52.945 [2024-11-19 21:26:26.482775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.945 [2024-11-19 21:26:26.482819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:52.945 [2024-11-19 21:26:26.497486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4f40 00:36:52.945 [2024-11-19 21:26:26.498626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3537 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:36:52.945 [2024-11-19 21:26:26.498670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:52.945 [2024-11-19 21:26:26.516712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0630 00:36:52.945 [2024-11-19 21:26:26.519310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.945 [2024-11-19 21:26:26.519363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:52.945 [2024-11-19 21:26:26.527949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde470 00:36:52.945 [2024-11-19 21:26:26.529107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.945 [2024-11-19 21:26:26.529152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:52.945 [2024-11-19 21:26:26.548199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016befae0 00:36:52.945 [2024-11-19 21:26:26.550645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.945 [2024-11-19 21:26:26.550689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:52.945 [2024-11-19 21:26:26.559984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb480 00:36:52.945 [2024-11-19 21:26:26.561325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.945 [2024-11-19 21:26:26.561371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.945 [2024-11-19 21:26:26.581059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be27f0 00:36:52.945 [2024-11-19 21:26:26.583709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.945 [2024-11-19 21:26:26.583754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:52.945 [2024-11-19 21:26:26.592921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebfd0 00:36:52.945 [2024-11-19 21:26:26.594295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.945 [2024-11-19 21:26:26.594338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:52.945 [2024-11-19 21:26:26.609140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beee38 00:36:52.945 [2024-11-19 21:26:26.610514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:122 nsid:1 lba:10525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.945 [2024-11-19 21:26:26.610557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:52.945 [2024-11-19 21:26:26.625523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfd640 00:36:52.945 [2024-11-19 21:26:26.626474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.945 [2024-11-19 21:26:26.626517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:52.945 [2024-11-19 21:26:26.641832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea248 00:36:52.945 [2024-11-19 21:26:26.643416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.945 [2024-11-19 21:26:26.643459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:52.945 [2024-11-19 21:26:26.659510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be73e0 00:36:52.945 [2024-11-19 21:26:26.661315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.945 [2024-11-19 21:26:26.661359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:52.945 [2024-11-19 21:26:26.674577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0ea0 00:36:52.945 [2024-11-19 21:26:26.676625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.945 [2024-11-19 21:26:26.676669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:52.945 [2024-11-19 21:26:26.691114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bee190 00:36:52.945 [2024-11-19 21:26:26.692928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.945 [2024-11-19 21:26:26.692972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:52.945 [2024-11-19 21:26:26.711156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6b70 00:36:52.945 [2024-11-19 21:26:26.713847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.945 [2024-11-19 21:26:26.713891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:52.945 [2024-11-19 21:26:26.722946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb8b8 00:36:52.945 [2024-11-19 21:26:26.724386] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.945 [2024-11-19 21:26:26.724429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:53.204 [2024-11-19 21:26:26.743303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4298 00:36:53.204 [2024-11-19 21:26:26.745622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.204 [2024-11-19 21:26:26.745665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:53.204 [2024-11-19 21:26:26.754901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5a90 00:36:53.204 [2024-11-19 21:26:26.756027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.204 [2024-11-19 21:26:26.756079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:53.204 [2024-11-19 21:26:26.771119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8088 00:36:53.204 [2024-11-19 21:26:26.772227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.204 [2024-11-19 21:26:26.772270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:53.204 [2024-11-19 21:26:26.787734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea248 00:36:53.204 [2024-11-19 21:26:26.788829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.204 [2024-11-19 21:26:26.788873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:53.204 [2024-11-19 21:26:26.805605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beee38 00:36:53.204 [2024-11-19 21:26:26.807556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.204 [2024-11-19 21:26:26.807600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:53.204 [2024-11-19 21:26:26.820432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea680 00:36:53.204 [2024-11-19 21:26:26.821767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.204 [2024-11-19 21:26:26.821810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:53.204 [2024-11-19 21:26:26.838138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016bf1868 00:36:53.204 [2024-11-19 21:26:26.840290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.204 [2024-11-19 21:26:26.840333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:53.204 [2024-11-19 21:26:26.852996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2948 00:36:53.204 [2024-11-19 21:26:26.854526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.204 [2024-11-19 21:26:26.854570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:53.204 [2024-11-19 21:26:26.867861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb048 00:36:53.204 [2024-11-19 21:26:26.869660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.204 [2024-11-19 21:26:26.869703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:53.204 [2024-11-19 21:26:26.884277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3498 00:36:53.204 [2024-11-19 21:26:26.885849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.204 [2024-11-19 21:26:26.885892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:53.204 [2024-11-19 21:26:26.900643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6458 00:36:53.204 [2024-11-19 21:26:26.901626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.204 [2024-11-19 21:26:26.901674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:53.204 [2024-11-19 21:26:26.920416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfcdd0 00:36:53.204 [2024-11-19 21:26:26.923058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.204 [2024-11-19 21:26:26.923111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:53.204 [2024-11-19 21:26:26.932187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3d08 00:36:53.204 [2024-11-19 21:26:26.933598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.204 [2024-11-19 21:26:26.933641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:53.204 [2024-11-19 21:26:26.952150] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf9f68 00:36:53.204 [2024-11-19 21:26:26.954413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.204 [2024-11-19 21:26:26.954457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:53.204 [2024-11-19 21:26:26.963841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bee5c8 00:36:53.204 [2024-11-19 21:26:26.964850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.204 [2024-11-19 21:26:26.964894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:53.204 [2024-11-19 21:26:26.983741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3060 00:36:53.204 [2024-11-19 21:26:26.985628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.204 [2024-11-19 21:26:26.985671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:53.463 [2024-11-19 21:26:27.000376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf31b8 00:36:53.463 [2024-11-19 21:26:27.002474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.463 [2024-11-19 21:26:27.002517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:53.463 [2024-11-19 21:26:27.016533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb760 00:36:53.463 [2024-11-19 21:26:27.018639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.463 [2024-11-19 21:26:27.018682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:53.463 [2024-11-19 21:26:27.028284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2510 00:36:53.463 [2024-11-19 21:26:27.029316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.463 [2024-11-19 21:26:27.029359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:53.463 [2024-11-19 21:26:27.045952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be38d0 00:36:53.463 [2024-11-19 21:26:27.047226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.463 [2024-11-19 21:26:27.047270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:53.463 
[2024-11-19 21:26:27.062331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa7d8 00:36:53.463 [2024-11-19 21:26:27.063827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.463 [2024-11-19 21:26:27.063870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:53.463 [2024-11-19 21:26:27.077357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef270 00:36:53.463 [2024-11-19 21:26:27.078639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.463 [2024-11-19 21:26:27.078682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:53.463 [2024-11-19 21:26:27.093916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3498 00:36:53.463 [2024-11-19 21:26:27.095601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.463 [2024-11-19 21:26:27.095644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:53.463 [2024-11-19 21:26:27.113915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfdeb0 00:36:53.463 [2024-11-19 21:26:27.116473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.463 [2024-11-19 21:26:27.116517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:53.463 [2024-11-19 21:26:27.125668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfbcf0 00:36:53.463 [2024-11-19 21:26:27.126975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.463 [2024-11-19 21:26:27.127018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:53.463 [2024-11-19 21:26:27.145664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8d30 00:36:53.463 [2024-11-19 21:26:27.147829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.464 [2024-11-19 21:26:27.147873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:53.464 [2024-11-19 21:26:27.159875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016becc78 00:36:53.464 15679.00 IOPS, 61.25 MiB/s [2024-11-19T20:26:27.259Z] [2024-11-19 21:26:27.162228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.464 [2024-11-19 21:26:27.162277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:53.464 [2024-11-19 21:26:27.176771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0630 00:36:53.464 [2024-11-19 21:26:27.178526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.464 [2024-11-19 21:26:27.178569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:53.464 [2024-11-19 21:26:27.193149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfda78 00:36:53.464 [2024-11-19 21:26:27.194303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.464 [2024-11-19 21:26:27.194351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:53.464 [2024-11-19 21:26:27.209089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed4e8 00:36:53.464 [2024-11-19 21:26:27.210292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.464 [2024-11-19 21:26:27.210346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:53.464 [2024-11-19 21:26:27.224365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0ff8 00:36:53.464 [2024-11-19 21:26:27.226177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.464 [2024-11-19 21:26:27.226220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:53.464 [2024-11-19 21:26:27.240767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6020 00:36:53.464 [2024-11-19 21:26:27.242318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.464 [2024-11-19 21:26:27.242360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:53.723 [2024-11-19 21:26:27.257139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1ca0 00:36:53.723 [2024-11-19 21:26:27.258064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.723 [2024-11-19 21:26:27.258117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:53.723 [2024-11-19 21:26:27.273575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8088 00:36:53.723 [2024-11-19 21:26:27.275151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:53.723 [2024-11-19 21:26:27.275194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:53.723 [2024-11-19 21:26:27.290056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdfdc0 00:36:53.723 [2024-11-19 21:26:27.291055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.723 [2024-11-19 21:26:27.291112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:53.723 [2024-11-19 21:26:27.309853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be2c28 00:36:53.723 [2024-11-19 21:26:27.312510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.723 [2024-11-19 21:26:27.312553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:53.723 [2024-11-19 21:26:27.321688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6cc8 00:36:53.723 [2024-11-19 21:26:27.323106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.723 [2024-11-19 21:26:27.323150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:53.723 [2024-11-19 21:26:27.338241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be12d8 00:36:53.723 [2024-11-19 21:26:27.339677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.724 [2024-11-19 21:26:27.339723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:53.724 [2024-11-19 21:26:27.359020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9168 00:36:53.724 [2024-11-19 21:26:27.361400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.724 [2024-11-19 21:26:27.361445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:53.724 [2024-11-19 21:26:27.375814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed0b0 00:36:53.724 [2024-11-19 21:26:27.378110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.724 [2024-11-19 21:26:27.378155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:53.724 [2024-11-19 21:26:27.390528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc560 00:36:53.724 [2024-11-19 21:26:27.392692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:21955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.724 [2024-11-19 21:26:27.392736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:53.724 [2024-11-19 21:26:27.407359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb328 00:36:53.724 [2024-11-19 21:26:27.409046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.724 [2024-11-19 21:26:27.409102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:53.724 [2024-11-19 21:26:27.421613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6fa8 00:36:53.724 [2024-11-19 21:26:27.423351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.724 [2024-11-19 21:26:27.423390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:53.724 [2024-11-19 21:26:27.436904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde470 00:36:53.724 [2024-11-19 21:26:27.438092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.724 [2024-11-19 21:26:27.438146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:53.724 [2024-11-19 21:26:27.451711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed4e8 00:36:53.724 [2024-11-19 21:26:27.452633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.724 [2024-11-19 21:26:27.452672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:53.724 [2024-11-19 21:26:27.469189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfbcf0 00:36:53.724 [2024-11-19 21:26:27.471067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.724 [2024-11-19 21:26:27.471115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:53.724 [2024-11-19 21:26:27.485319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be23b8 00:36:53.724 [2024-11-19 21:26:27.487587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.724 [2024-11-19 21:26:27.487641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:53.724 [2024-11-19 21:26:27.496765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfbcf0 00:36:53.724 [2024-11-19 21:26:27.497892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.724 [2024-11-19 21:26:27.497933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:53.724 [2024-11-19 21:26:27.515795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf46d0 00:36:53.982 [2024-11-19 21:26:27.517506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.982 [2024-11-19 21:26:27.517560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:53.982 [2024-11-19 21:26:27.530312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4140 00:36:53.982 [2024-11-19 21:26:27.531897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.982 [2024-11-19 21:26:27.531937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:53.982 [2024-11-19 21:26:27.546029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf81e0 00:36:53.982 [2024-11-19 21:26:27.548052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.982 [2024-11-19 21:26:27.548101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:53.982 [2024-11-19 21:26:27.561736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa3a0 00:36:53.982 [2024-11-19 21:26:27.563102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.983 [2024-11-19 21:26:27.563144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:53.983 [2024-11-19 21:26:27.575649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0630 00:36:53.983 [2024-11-19 21:26:27.577250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.983 [2024-11-19 21:26:27.577290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:53.983 [2024-11-19 21:26:27.590768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4b08 00:36:53.983 [2024-11-19 21:26:27.592037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.983 [2024-11-19 21:26:27.592086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:53.983 [2024-11-19 21:26:27.606568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2948 
00:36:53.983 [2024-11-19 21:26:27.608260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.983 [2024-11-19 21:26:27.608301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:53.983 [2024-11-19 21:26:27.625813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0ea0 00:36:53.983 [2024-11-19 21:26:27.628338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.983 [2024-11-19 21:26:27.628380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:53.983 [2024-11-19 21:26:27.637501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6890 00:36:53.983 [2024-11-19 21:26:27.638965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.983 [2024-11-19 21:26:27.639005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:53.983 [2024-11-19 21:26:27.653614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4298 00:36:53.983 [2024-11-19 21:26:27.654364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.983 [2024-11-19 21:26:27.654404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:53.983 [2024-11-19 21:26:27.672935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9168 00:36:53.983 [2024-11-19 21:26:27.675272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.983 [2024-11-19 21:26:27.675314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:53.983 [2024-11-19 21:26:27.684088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8a50 00:36:53.983 [2024-11-19 21:26:27.685274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.983 [2024-11-19 21:26:27.685329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:53.983 [2024-11-19 21:26:27.699571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3a28 00:36:53.983 [2024-11-19 21:26:27.700747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.983 [2024-11-19 21:26:27.700786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:53.983 [2024-11-19 21:26:27.715182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000004480) with pdu=0x200016bea248 00:36:53.983 [2024-11-19 21:26:27.716012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.983 [2024-11-19 21:26:27.716053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:53.983 [2024-11-19 21:26:27.730892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed0b0 00:36:53.983 [2024-11-19 21:26:27.732130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.983 [2024-11-19 21:26:27.732170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:53.983 [2024-11-19 21:26:27.747308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beea00 00:36:53.983 [2024-11-19 21:26:27.748921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.983 [2024-11-19 21:26:27.748961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:53.983 [2024-11-19 21:26:27.766526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfd208 00:36:53.983 [2024-11-19 21:26:27.768907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.983 [2024-11-19 21:26:27.768948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:54.242 [2024-11-19 21:26:27.777710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0ff8 00:36:54.242 [2024-11-19 21:26:27.778996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.242 [2024-11-19 21:26:27.779034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:54.242 [2024-11-19 21:26:27.794665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8088 00:36:54.242 [2024-11-19 21:26:27.796116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.242 [2024-11-19 21:26:27.796156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:54.242 [2024-11-19 21:26:27.808741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde470 00:36:54.242 [2024-11-19 21:26:27.810151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.242 [2024-11-19 21:26:27.810206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:54.242 [2024-11-19 
21:26:27.824188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdece0 00:36:54.242 [2024-11-19 21:26:27.825094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.242 [2024-11-19 21:26:27.825134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:54.242 [2024-11-19 21:26:27.839538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beea00 00:36:54.242 [2024-11-19 21:26:27.840933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.242 [2024-11-19 21:26:27.840973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:54.242 [2024-11-19 21:26:27.858235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0350 00:36:54.242 [2024-11-19 21:26:27.860409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.242 [2024-11-19 21:26:27.860464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:54.242 [2024-11-19 21:26:27.869210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2948 00:36:54.242 [2024-11-19 21:26:27.870264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.242 [2024-11-19 21:26:27.870302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:54.242 [2024-11-19 21:26:27.887938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf20d8 00:36:54.242 [2024-11-19 21:26:27.889637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.242 [2024-11-19 21:26:27.889687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:54.242 [2024-11-19 21:26:27.902842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3e60 00:36:54.242 [2024-11-19 21:26:27.904711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.242 [2024-11-19 21:26:27.904751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:54.242 [2024-11-19 21:26:27.917159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be84c0 00:36:54.242 [2024-11-19 21:26:27.919118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.242 [2024-11-19 21:26:27.919162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 
cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:54.242 [2024-11-19 21:26:27.932992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1b48 00:36:54.242 [2024-11-19 21:26:27.934525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.242 [2024-11-19 21:26:27.934563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:54.242 [2024-11-19 21:26:27.948303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb8b8 00:36:54.242 [2024-11-19 21:26:27.949812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.242 [2024-11-19 21:26:27.949852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:54.242 [2024-11-19 21:26:27.966357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb328 00:36:54.242 [2024-11-19 21:26:27.968583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.242 [2024-11-19 21:26:27.968623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:54.242 [2024-11-19 21:26:27.977333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6890 00:36:54.242 [2024-11-19 21:26:27.978438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.242 [2024-11-19 21:26:27.978493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:54.242 [2024-11-19 21:26:27.996176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:54.242 [2024-11-19 21:26:27.998166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.242 [2024-11-19 21:26:27.998208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:54.242 [2024-11-19 21:26:28.010543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6300 00:36:54.242 [2024-11-19 21:26:28.012446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.242 [2024-11-19 21:26:28.012485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:54.242 [2024-11-19 21:26:28.026203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be12d8 00:36:54.242 [2024-11-19 21:26:28.027845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.242 [2024-11-19 21:26:28.027884] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:54.501 [2024-11-19 21:26:28.042230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf9b30 00:36:54.501 [2024-11-19 21:26:28.043411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.502 [2024-11-19 21:26:28.043451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:54.502 [2024-11-19 21:26:28.056256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfd208 00:36:54.502 [2024-11-19 21:26:28.057312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.502 [2024-11-19 21:26:28.057352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:54.502 [2024-11-19 21:26:28.071665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfef90 00:36:54.502 [2024-11-19 21:26:28.073075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.502 [2024-11-19 21:26:28.073115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:54.502 [2024-11-19 21:26:28.090421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be23b8 00:36:54.502 [2024-11-19 21:26:28.092574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.502 [2024-11-19 21:26:28.092627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:54.502 [2024-11-19 21:26:28.105508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6890 00:36:54.502 [2024-11-19 21:26:28.107676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.502 [2024-11-19 21:26:28.107716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:54.502 [2024-11-19 21:26:28.116396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfdeb0 00:36:54.502 [2024-11-19 21:26:28.117687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.502 [2024-11-19 21:26:28.117726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:54.502 [2024-11-19 21:26:28.131658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde8a8 00:36:54.502 [2024-11-19 21:26:28.132943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.502 [2024-11-19 
21:26:28.132983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:54.502 [2024-11-19 21:26:28.150420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed920 00:36:54.502 [2024-11-19 21:26:28.152618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.502 [2024-11-19 21:26:28.152685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:54.502 16010.00 IOPS, 62.54 MiB/s [2024-11-19T20:26:28.297Z] [2024-11-19 21:26:28.163807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2510 00:36:54.502 [2024-11-19 21:26:28.165103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.502 [2024-11-19 21:26:28.165142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:54.502 00:36:54.502 Latency(us) 00:36:54.502 [2024-11-19T20:26:28.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:54.502 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:54.502 nvme0n1 : 2.01 16026.70 62.60 0.00 0.00 7970.39 3446.71 21651.15 00:36:54.502 [2024-11-19T20:26:28.297Z] =================================================================================================================== 00:36:54.502 [2024-11-19T20:26:28.297Z] Total : 16026.70 62.60 0.00 0.00 7970.39 3446.71 21651.15 00:36:54.502 { 00:36:54.502 "results": [ 00:36:54.502 { 00:36:54.502 "job": "nvme0n1", 00:36:54.502 "core_mask": "0x2", 00:36:54.502 "workload": "randwrite", 00:36:54.502 "status": "finished", 00:36:54.502 "queue_depth": 128, 00:36:54.502 "io_size": 4096, 00:36:54.502 "runtime": 2.005903, 00:36:54.502 "iops": 16026.697203204742, 00:36:54.502 "mibps": 62.60428595001852, 00:36:54.502 "io_failed": 0, 00:36:54.502 "io_timeout": 0, 00:36:54.502 "avg_latency_us": 7970.38650316361, 00:36:54.502 "min_latency_us": 3446.708148148148, 00:36:54.502 "max_latency_us": 21651.152592592593 00:36:54.502 } 00:36:54.502 ], 00:36:54.502 "core_count": 1 00:36:54.502 } 00:36:54.502 21:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:54.502 21:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:54.502 21:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:54.502 | .driver_specific 00:36:54.502 | .nvme_error 00:36:54.502 | .status_code 00:36:54.502 | .command_transient_transport_error' 00:36:54.502 21:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:54.761 21:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 126 > 0 )) 00:36:54.761 21:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3162753 00:36:54.761 21:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3162753 ']' 
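[editor's note] The trace above shows how digest.sh turns the flood of digest errors into a pass/fail check: get_transient_errcount issues bdev_get_iostat over the bdevperf RPC socket and pulls the per-bdev NVMe transient-transport-error counter out with jq, and the subsequent (( 126 > 0 )) test only requires that counter to be non-zero. A minimal sketch of the same query, assuming bdevperf is still serving RPC on /var/tmp/bperf.sock and paths are relative to the SPDK tree:

  # Sketch (not part of the log): the query behind get_transient_errcount above.
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error'
  # This run reported 126 transient transport errors; digest.sh only asserts that
  # the value is greater than zero before killing the bperf process.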
00:36:54.761 21:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3162753 00:36:54.761 21:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:54.761 21:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:54.761 21:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3162753 00:36:54.761 21:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:54.761 21:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:54.761 21:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3162753' 00:36:54.761 killing process with pid 3162753 00:36:54.761 21:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3162753 00:36:54.761 Received shutdown signal, test time was about 2.000000 seconds 00:36:54.761 00:36:54.761 Latency(us) 00:36:54.761 [2024-11-19T20:26:28.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:54.761 [2024-11-19T20:26:28.556Z] =================================================================================================================== 00:36:54.761 [2024-11-19T20:26:28.556Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:54.761 21:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3162753 00:36:55.695 21:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:36:55.695 21:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:55.695 21:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:55.695 21:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:55.695 21:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:55.695 21:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3163298 00:36:55.695 21:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:36:55.695 21:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3163298 /var/tmp/bperf.sock 00:36:55.695 21:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3163298 ']' 00:36:55.695 21:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:55.695 21:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:55.695 21:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:55.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
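[editor's note] Having torn down the 4 KiB, queue-depth-128 bperf instance, digest.sh starts the next case: randwrite with 128 KiB I/O at queue depth 16. A minimal sketch of that launch, with variable names taken from the trace above (rw, bs, qd) and paths relative to the SPDK tree:

  # Sketch (not part of the log): how run_bperf_err's arguments map onto the
  # bdevperf command line recorded above.
  rw=randwrite bs=131072 qd=16
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w "$rw" -o "$bs" -t 2 -q "$qd" -z &
  bperfpid=$!        # the trace recorded pid 3163298 here
  # -z starts bdevperf idle; it waits for a perform_tests RPC on /var/tmp/bperf.sock
  # before running the 2-second workload.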
00:36:55.695 21:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:55.695 21:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:55.695 [2024-11-19 21:26:29.459715] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:36:55.695 [2024-11-19 21:26:29.459845] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3163298 ] 00:36:55.695 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:55.695 Zero copy mechanism will not be used. 00:36:55.953 [2024-11-19 21:26:29.601576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:55.953 [2024-11-19 21:26:29.737936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:56.886 21:26:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:56.886 21:26:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:56.887 21:26:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:56.887 21:26:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:57.144 21:26:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:57.144 21:26:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.144 21:26:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:57.144 21:26:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.144 21:26:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:57.145 21:26:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:57.711 nvme0n1 00:36:57.711 21:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:57.711 21:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.711 21:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:57.711 21:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.711 21:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:57.711 21:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
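[editor's note] With the 128 KiB bperf instance listening, the trace above shows the per-run setup: enable NVMe error statistics and unlimited bdev retries, attach the controller with --ddgst so the NVMe/TCP data digest is in use, then arm crc32c corruption via the accel error-injection RPC before kicking off perform_tests. A condensed sketch of that sequence, with one assumption: rpc_cmd is modeled as the autotest helper talking to the target's default RPC socket, which this excerpt does not show.

  # Sketch (not part of the log): the setup digest.sh issues before perform_tests.
  bperf_rpc() { ./scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  rpc_cmd()   { ./scripts/rpc.py "$@"; }   # assumption: autotest helper, default target socket
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep NVMe error stats; retry failed I/O
  rpc_cmd accel_error_inject_error -o crc32c -t disable                     # attach with clean digests
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # --ddgst enables the data digest
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32               # arm crc32c corruption (flags as in the trace)
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests # start the 2-second run

  # The corrupted digests then surface below as data_crc32_calc_done errors paired
  # with COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions.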
00:36:57.711 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:57.711 Zero copy mechanism will not be used. 00:36:57.711 Running I/O for 2 seconds... 00:36:57.711 [2024-11-19 21:26:31.357213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.711 [2024-11-19 21:26:31.357344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.711 [2024-11-19 21:26:31.357426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.711 [2024-11-19 21:26:31.365310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.711 [2024-11-19 21:26:31.365468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.711 [2024-11-19 21:26:31.365519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.711 [2024-11-19 21:26:31.372768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.711 [2024-11-19 21:26:31.372883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.711 [2024-11-19 21:26:31.372928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.711 [2024-11-19 21:26:31.380178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.711 [2024-11-19 21:26:31.380300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.711 [2024-11-19 21:26:31.380340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.711 [2024-11-19 21:26:31.387499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.712 [2024-11-19 21:26:31.387626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.712 [2024-11-19 21:26:31.387669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.712 [2024-11-19 21:26:31.394856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.712 [2024-11-19 21:26:31.394961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.712 [2024-11-19 21:26:31.395004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.712 [2024-11-19 21:26:31.402191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.712 [2024-11-19 21:26:31.402293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.712 [2024-11-19 21:26:31.402332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.712 [2024-11-19 21:26:31.409291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.712 [2024-11-19 21:26:31.409428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.712 [2024-11-19 21:26:31.409472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.712 [2024-11-19 21:26:31.416566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.712 [2024-11-19 21:26:31.416681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.712 [2024-11-19 21:26:31.416729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.712 [2024-11-19 21:26:31.423814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.712 [2024-11-19 21:26:31.423946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.712 [2024-11-19 21:26:31.423990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.712 [2024-11-19 21:26:31.431124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.712 [2024-11-19 21:26:31.431236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.712 [2024-11-19 21:26:31.431276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.712 [2024-11-19 21:26:31.438292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.712 [2024-11-19 21:26:31.438404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.712 [2024-11-19 21:26:31.438448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.712 [2024-11-19 21:26:31.445532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.712 [2024-11-19 21:26:31.445657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.712 [2024-11-19 21:26:31.445700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.712 [2024-11-19 21:26:31.452629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.712 [2024-11-19 21:26:31.452748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.712 [2024-11-19 21:26:31.452792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.712 [2024-11-19 21:26:31.459800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.712 [2024-11-19 21:26:31.459917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.712 [2024-11-19 21:26:31.459985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.712 [2024-11-19 21:26:31.466806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.712 [2024-11-19 21:26:31.466961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.712 [2024-11-19 21:26:31.467005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.712 [2024-11-19 21:26:31.474572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.712 [2024-11-19 21:26:31.474676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.712 [2024-11-19 21:26:31.474727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.712 [2024-11-19 21:26:31.482165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.712 [2024-11-19 21:26:31.482296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.712 [2024-11-19 21:26:31.482335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.712 [2024-11-19 21:26:31.489194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.712 [2024-11-19 21:26:31.489310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.712 [2024-11-19 21:26:31.489365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.712 [2024-11-19 21:26:31.496484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.712 [2024-11-19 21:26:31.496604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.712 [2024-11-19 21:26:31.496648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.712 [2024-11-19 21:26:31.503749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:36:57.712 [2024-11-19 21:26:31.503877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.712 [2024-11-19 21:26:31.503921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.971 [2024-11-19 21:26:31.510864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.971 [2024-11-19 21:26:31.510988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.971 [2024-11-19 21:26:31.511031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.971 [2024-11-19 21:26:31.517902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.971 [2024-11-19 21:26:31.518032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.971 [2024-11-19 21:26:31.518089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.971 [2024-11-19 21:26:31.525082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.971 [2024-11-19 21:26:31.525223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.971 [2024-11-19 21:26:31.525263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.971 [2024-11-19 21:26:31.532251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.971 [2024-11-19 21:26:31.532360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.971 [2024-11-19 21:26:31.532419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.971 [2024-11-19 21:26:31.539491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.971 [2024-11-19 21:26:31.539596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.971 [2024-11-19 21:26:31.539644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.971 [2024-11-19 21:26:31.546615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.971 [2024-11-19 21:26:31.546757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.546801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.553773] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.553894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.553941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.560862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.561080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.561137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.567917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.568155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.568198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.574943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.575201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.575243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.582190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.582430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.582474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.590314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.590454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.590502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.597576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.597708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.597756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.604616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.604769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.604812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.611801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.611956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.612006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.619001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.619254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.619298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.626501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.626717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.626764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.634852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.635084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.635141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.641951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.642115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.642155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.648991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.649161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.649200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.656066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.656225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.656265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.663297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.663420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.663471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.671736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.671960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.672004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.679189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.679448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.679496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.686363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.686592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.686640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.693539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.693737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.693785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.700671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.700915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 
21:26:31.700959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.707779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.707996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.708039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.714994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.715237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.715276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.722085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.722299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.722338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.729161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.729346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.729404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.972 [2024-11-19 21:26:31.736267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.972 [2024-11-19 21:26:31.736476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.972 [2024-11-19 21:26:31.736518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.973 [2024-11-19 21:26:31.743197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.973 [2024-11-19 21:26:31.743416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.973 [2024-11-19 21:26:31.743460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.973 [2024-11-19 21:26:31.750172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.973 [2024-11-19 21:26:31.750287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.973 [2024-11-19 21:26:31.750331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.973 [2024-11-19 21:26:31.757158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.973 [2024-11-19 21:26:31.757398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.973 [2024-11-19 21:26:31.757441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.232 [2024-11-19 21:26:31.764472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.232 [2024-11-19 21:26:31.764682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.232 [2024-11-19 21:26:31.764727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.232 [2024-11-19 21:26:31.771686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.232 [2024-11-19 21:26:31.771886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.232 [2024-11-19 21:26:31.771943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.232 [2024-11-19 21:26:31.778640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.232 [2024-11-19 21:26:31.778857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.232 [2024-11-19 21:26:31.778900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.232 [2024-11-19 21:26:31.785654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.232 [2024-11-19 21:26:31.785857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.232 [2024-11-19 21:26:31.785900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.232 [2024-11-19 21:26:31.792609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.232 [2024-11-19 21:26:31.792829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.232 [2024-11-19 21:26:31.792872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.232 [2024-11-19 21:26:31.799694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.232 [2024-11-19 21:26:31.799845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.232 [2024-11-19 21:26:31.799894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.232 [2024-11-19 21:26:31.806985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.232 [2024-11-19 21:26:31.807199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.232 [2024-11-19 21:26:31.807238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.232 [2024-11-19 21:26:31.814869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.815114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.815172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.822390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.822612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.822655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.829509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.829650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.829700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.836644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.836790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.836841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.843683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.843823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.843873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.850697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.850887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.850930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.857774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.857988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.858031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.864775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.865022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.865065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.871837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.871988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.872038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.878879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.879082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.879141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.886702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.886918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.886960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.895330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.895451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.895503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.903256] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.903350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.903413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.910235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.910335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.910398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.917491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.917614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.917662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.924727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.924831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.924896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.932047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.932194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.932232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.939361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.939502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.939559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.946812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.946930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.946973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.953954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.954076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.954137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.961280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.961413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.961463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.968492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.968616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.968666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.975817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.975937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.975986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.982948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.983060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.983141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.989925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.990034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.990113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:31.997077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:31.997201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:31.997246] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.233 [2024-11-19 21:26:32.004687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.233 [2024-11-19 21:26:32.004798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.233 [2024-11-19 21:26:32.004847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.234 [2024-11-19 21:26:32.011969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.234 [2024-11-19 21:26:32.012186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.234 [2024-11-19 21:26:32.012225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.234 [2024-11-19 21:26:32.019450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.234 [2024-11-19 21:26:32.019695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.234 [2024-11-19 21:26:32.019739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.026408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.026642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 21:26:32.026686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.033465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.033699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 21:26:32.033741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.040796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.040959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 21:26:32.040997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.049107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.049270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 
21:26:32.049309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.057766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.057990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 21:26:32.058042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.066413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.066586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 21:26:32.066630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.075309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.075533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 21:26:32.075576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.084029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.084220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 21:26:32.084259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.092822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.093042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 21:26:32.093106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.101262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.101474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 21:26:32.101523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.109779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.109991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 21:26:32.110043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.117770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.117883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 21:26:32.117931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.125448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.125674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 21:26:32.125718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.133027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.133253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 21:26:32.133292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.140733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.140879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 21:26:32.140929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.148184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.148294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 21:26:32.148338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.155942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.156103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 21:26:32.156161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.163535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.163638] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 21:26:32.163686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.171785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.171981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 21:26:32.172024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.179914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.493 [2024-11-19 21:26:32.180126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.493 [2024-11-19 21:26:32.180181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.493 [2024-11-19 21:26:32.187313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.494 [2024-11-19 21:26:32.187472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.494 [2024-11-19 21:26:32.187521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.494 [2024-11-19 21:26:32.194578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.494 [2024-11-19 21:26:32.194794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.494 [2024-11-19 21:26:32.194837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.494 [2024-11-19 21:26:32.201897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.494 [2024-11-19 21:26:32.202131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.494 [2024-11-19 21:26:32.202171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.494 [2024-11-19 21:26:32.209160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.494 [2024-11-19 21:26:32.209406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.494 [2024-11-19 21:26:32.209449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.494 [2024-11-19 21:26:32.216521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:36:58.494 [2024-11-19 21:26:32.216706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.494 [2024-11-19 21:26:32.216748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.494 [2024-11-19 21:26:32.223707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.494 [2024-11-19 21:26:32.223926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.494 [2024-11-19 21:26:32.223976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.494 [2024-11-19 21:26:32.230725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.494 [2024-11-19 21:26:32.230937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.494 [2024-11-19 21:26:32.230979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.494 [2024-11-19 21:26:32.237774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.494 [2024-11-19 21:26:32.237989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.494 [2024-11-19 21:26:32.238032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.494 [2024-11-19 21:26:32.244909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.494 [2024-11-19 21:26:32.245064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.494 [2024-11-19 21:26:32.245136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.494 [2024-11-19 21:26:32.252303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.494 [2024-11-19 21:26:32.252532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.494 [2024-11-19 21:26:32.252575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.494 [2024-11-19 21:26:32.260696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.494 [2024-11-19 21:26:32.260887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.494 [2024-11-19 21:26:32.260930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.494 [2024-11-19 21:26:32.268275] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.494 [2024-11-19 21:26:32.268406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.494 [2024-11-19 21:26:32.268452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.494 [2024-11-19 21:26:32.275645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.494 [2024-11-19 21:26:32.275871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.494 [2024-11-19 21:26:32.275915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.494 [2024-11-19 21:26:32.282772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.494 [2024-11-19 21:26:32.282994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.494 [2024-11-19 21:26:32.283038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.756 [2024-11-19 21:26:32.289921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.756 [2024-11-19 21:26:32.290082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.756 [2024-11-19 21:26:32.290147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.756 [2024-11-19 21:26:32.297527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.756 [2024-11-19 21:26:32.297722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.756 [2024-11-19 21:26:32.297765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.756 [2024-11-19 21:26:32.304933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.756 [2024-11-19 21:26:32.305120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.756 [2024-11-19 21:26:32.305162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.756 [2024-11-19 21:26:32.313093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.756 [2024-11-19 21:26:32.313297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.756 [2024-11-19 21:26:32.313335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:36:58.756 [2024-11-19 21:26:32.320514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.756 [2024-11-19 21:26:32.320636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.756 [2024-11-19 21:26:32.320678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.756 [2024-11-19 21:26:32.327771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.756 [2024-11-19 21:26:32.327914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.756 [2024-11-19 21:26:32.327962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.756 [2024-11-19 21:26:32.334980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.756 [2024-11-19 21:26:32.335114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.756 [2024-11-19 21:26:32.335153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.756 [2024-11-19 21:26:32.342681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.756 [2024-11-19 21:26:32.342904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.756 [2024-11-19 21:26:32.342945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.756 [2024-11-19 21:26:32.350291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.756 [2024-11-19 21:26:32.350518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.756 [2024-11-19 21:26:32.350568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.756 [2024-11-19 21:26:32.357406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.756 [2024-11-19 21:26:32.359433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.756 [2024-11-19 21:26:32.359476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.756 4185.00 IOPS, 523.12 MiB/s [2024-11-19T20:26:32.551Z] [2024-11-19 21:26:32.366081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.756 [2024-11-19 21:26:32.366213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.756 [2024-11-19 21:26:32.366258] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.756 [2024-11-19 21:26:32.373198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.756 [2024-11-19 21:26:32.373432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.756 [2024-11-19 21:26:32.373474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.756 [2024-11-19 21:26:32.380230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.756 [2024-11-19 21:26:32.380439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.756 [2024-11-19 21:26:32.380480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.387267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.387479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.387532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.394275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.394534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.394575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.401490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.401627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.401677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.408664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.408878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.408935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.415732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.415963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.416005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.422753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.422928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.422971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.429785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.430003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.430045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.436717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.436941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.436984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.443758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.444007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.444048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.451020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.451195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.451233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.458195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.458399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.458441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.466007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.466239] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.466277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.473796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.473959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.474007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.480885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.481050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.481118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.487968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.488147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.488184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.495117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.495241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.495285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.502237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.502404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.502452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.509969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.510105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.510167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.517352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 
21:26:32.517479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.517521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.524834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.524963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.525011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.532320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.532441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.532483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.539330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.539470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.539518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.757 [2024-11-19 21:26:32.546424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.757 [2024-11-19 21:26:32.546539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.757 [2024-11-19 21:26:32.546581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.016 [2024-11-19 21:26:32.553609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.016 [2024-11-19 21:26:32.553719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.016 [2024-11-19 21:26:32.553767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.016 [2024-11-19 21:26:32.560944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.016 [2024-11-19 21:26:32.561057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.016 [2024-11-19 21:26:32.561119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.016 [2024-11-19 21:26:32.568260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.016 [2024-11-19 21:26:32.568373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.016 [2024-11-19 21:26:32.568415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.016 [2024-11-19 21:26:32.575620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.016 [2024-11-19 21:26:32.575724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.016 [2024-11-19 21:26:32.575765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.016 [2024-11-19 21:26:32.582802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.016 [2024-11-19 21:26:32.582918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.016 [2024-11-19 21:26:32.582971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.016 [2024-11-19 21:26:32.589920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.016 [2024-11-19 21:26:32.590045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.016 [2024-11-19 21:26:32.590115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.016 [2024-11-19 21:26:32.597269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.016 [2024-11-19 21:26:32.597388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.016 [2024-11-19 21:26:32.597430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.016 [2024-11-19 21:26:32.604369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.016 [2024-11-19 21:26:32.604506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.016 [2024-11-19 21:26:32.604552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.016 [2024-11-19 21:26:32.611571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.016 [2024-11-19 21:26:32.611676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.016 [2024-11-19 21:26:32.611718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.016 [2024-11-19 21:26:32.618973] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.016 [2024-11-19 21:26:32.619141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.016 [2024-11-19 21:26:32.619178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.016 [2024-11-19 21:26:32.626165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.016 [2024-11-19 21:26:32.626271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.016 [2024-11-19 21:26:32.626309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.016 [2024-11-19 21:26:32.633953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.016 [2024-11-19 21:26:32.634115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.016 [2024-11-19 21:26:32.634163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.016 [2024-11-19 21:26:32.641210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.016 [2024-11-19 21:26:32.641313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.016 [2024-11-19 21:26:32.641361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.648990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.017 [2024-11-19 21:26:32.649184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.017 [2024-11-19 21:26:32.649222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.656354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.017 [2024-11-19 21:26:32.656609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.017 [2024-11-19 21:26:32.656650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.663580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.017 [2024-11-19 21:26:32.663804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.017 [2024-11-19 21:26:32.663859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.670588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.017 [2024-11-19 21:26:32.670819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.017 [2024-11-19 21:26:32.670861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.677754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.017 [2024-11-19 21:26:32.677924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.017 [2024-11-19 21:26:32.677966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.684917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.017 [2024-11-19 21:26:32.685166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.017 [2024-11-19 21:26:32.685205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.692077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.017 [2024-11-19 21:26:32.692287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.017 [2024-11-19 21:26:32.692325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.699307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.017 [2024-11-19 21:26:32.699542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.017 [2024-11-19 21:26:32.699583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.706575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.017 [2024-11-19 21:26:32.706761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.017 [2024-11-19 21:26:32.706803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.713859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.017 [2024-11-19 21:26:32.714094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.017 [2024-11-19 21:26:32.714151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.721595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.017 [2024-11-19 21:26:32.721789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.017 [2024-11-19 21:26:32.721830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.728861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.017 [2024-11-19 21:26:32.729122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.017 [2024-11-19 21:26:32.729162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.736192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.017 [2024-11-19 21:26:32.736441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.017 [2024-11-19 21:26:32.736484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.743265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.017 [2024-11-19 21:26:32.743487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.017 [2024-11-19 21:26:32.743530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.750279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.017 [2024-11-19 21:26:32.750464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.017 [2024-11-19 21:26:32.750508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.757504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.017 [2024-11-19 21:26:32.757672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.017 [2024-11-19 21:26:32.757716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.764561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.017 [2024-11-19 21:26:32.764755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.017 [2024-11-19 
21:26:32.764800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.771593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.017 [2024-11-19 21:26:32.771760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.017 [2024-11-19 21:26:32.771810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.778714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.017 [2024-11-19 21:26:32.778947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.017 [2024-11-19 21:26:32.778991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.017 [2024-11-19 21:26:32.786020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.018 [2024-11-19 21:26:32.786250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.018 [2024-11-19 21:26:32.786297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.018 [2024-11-19 21:26:32.793176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.018 [2024-11-19 21:26:32.793393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.018 [2024-11-19 21:26:32.793437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.018 [2024-11-19 21:26:32.800222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.018 [2024-11-19 21:26:32.800370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.018 [2024-11-19 21:26:32.800434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.018 [2024-11-19 21:26:32.807283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.018 [2024-11-19 21:26:32.807530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.018 [2024-11-19 21:26:32.807573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.277 [2024-11-19 21:26:32.814421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.277 [2024-11-19 21:26:32.814675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.277 [2024-11-19 21:26:32.814719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.277 [2024-11-19 21:26:32.821460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.277 [2024-11-19 21:26:32.821685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.277 [2024-11-19 21:26:32.821728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.277 [2024-11-19 21:26:32.828621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.277 [2024-11-19 21:26:32.828805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.277 [2024-11-19 21:26:32.828848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.277 [2024-11-19 21:26:32.835634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.277 [2024-11-19 21:26:32.835861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.277 [2024-11-19 21:26:32.835904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.277 [2024-11-19 21:26:32.842602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.277 [2024-11-19 21:26:32.842827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.277 [2024-11-19 21:26:32.842870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.849575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.849713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.849764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.856596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.856815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.856859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.863638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.863878] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.863921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.870554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.870750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.870809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.877584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.877745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.877795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.884748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.884984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.885028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.891878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.892144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.892185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.898876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.899049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.899118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.905908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.906190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.906230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.912899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.913159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.913199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.919937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.920171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.920211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.926972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.927215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.927255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.934021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.934234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.934279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.941100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.941316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.941356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.948192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.948410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.948453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.955156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.955355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.955413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.962172] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.962409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.962452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.969091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.969320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.969386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.976046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.976253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.976294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.983109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.983334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.983389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.990246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.990484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.990528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:32.997184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:32.997421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:32.997466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:33.004108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:33.004335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:33.004389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:33.011175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:33.011402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:33.011446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:33.018035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:33.018260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:33.018300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:33.025048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.278 [2024-11-19 21:26:33.025266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.278 [2024-11-19 21:26:33.025305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.278 [2024-11-19 21:26:33.032155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.279 [2024-11-19 21:26:33.032346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.279 [2024-11-19 21:26:33.032385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.279 [2024-11-19 21:26:33.039157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.279 [2024-11-19 21:26:33.039388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.279 [2024-11-19 21:26:33.039431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.279 [2024-11-19 21:26:33.046252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.279 [2024-11-19 21:26:33.046405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.279 [2024-11-19 21:26:33.046449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.279 [2024-11-19 21:26:33.053236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.279 [2024-11-19 21:26:33.053444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.279 [2024-11-19 21:26:33.053488] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.279 [2024-11-19 21:26:33.060177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.279 [2024-11-19 21:26:33.060379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.279 [2024-11-19 21:26:33.060436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.279 [2024-11-19 21:26:33.067103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.279 [2024-11-19 21:26:33.067326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.279 [2024-11-19 21:26:33.067365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.538 [2024-11-19 21:26:33.074132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.538 [2024-11-19 21:26:33.074314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.538 [2024-11-19 21:26:33.074353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.538 [2024-11-19 21:26:33.081245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.538 [2024-11-19 21:26:33.081474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.538 [2024-11-19 21:26:33.081516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.538 [2024-11-19 21:26:33.088359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.538 [2024-11-19 21:26:33.088623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.538 [2024-11-19 21:26:33.088675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.538 [2024-11-19 21:26:33.095826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.538 [2024-11-19 21:26:33.095949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.538 [2024-11-19 21:26:33.095992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.538 [2024-11-19 21:26:33.103813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.538 [2024-11-19 21:26:33.104033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.538 [2024-11-19 
21:26:33.104084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.538 [2024-11-19 21:26:33.111085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.538 [2024-11-19 21:26:33.111312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.538 [2024-11-19 21:26:33.111368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.538 [2024-11-19 21:26:33.118226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.538 [2024-11-19 21:26:33.118448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.538 [2024-11-19 21:26:33.118493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.538 [2024-11-19 21:26:33.125505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.538 [2024-11-19 21:26:33.125723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.538 [2024-11-19 21:26:33.125767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.538 [2024-11-19 21:26:33.132706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.538 [2024-11-19 21:26:33.132912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.538 [2024-11-19 21:26:33.132957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.538 [2024-11-19 21:26:33.139746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.538 [2024-11-19 21:26:33.139989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.538 [2024-11-19 21:26:33.140034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.538 [2024-11-19 21:26:33.146718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.538 [2024-11-19 21:26:33.146863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.538 [2024-11-19 21:26:33.146912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.538 [2024-11-19 21:26:33.153906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.538 [2024-11-19 21:26:33.154129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.538 [2024-11-19 21:26:33.154168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.538 [2024-11-19 21:26:33.161179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.538 [2024-11-19 21:26:33.161404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.538 [2024-11-19 21:26:33.161448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.538 [2024-11-19 21:26:33.168275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.538 [2024-11-19 21:26:33.168502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.538 [2024-11-19 21:26:33.168545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.538 [2024-11-19 21:26:33.175473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.538 [2024-11-19 21:26:33.175674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.538 [2024-11-19 21:26:33.175717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.538 [2024-11-19 21:26:33.182558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.538 [2024-11-19 21:26:33.182712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.182755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.189936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.190172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.190213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.198313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.198511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.198554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.205704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.205839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.205905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.212689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.212845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.212902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.219948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.220080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.220142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.227496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.227644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.227693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.234488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.234607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.234658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.241595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.241815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.241858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.249580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.249784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.249827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.256793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.256964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.257018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.263924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.264066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.264127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.271090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.271267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.271307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.279473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.279661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.279705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.286602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.286785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.286829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.293719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.293829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.293870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.300792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.300998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.301042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.307938] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.308163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.308203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.315122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.315321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.315360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.322264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.322494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.322538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.539 [2024-11-19 21:26:33.329538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.539 [2024-11-19 21:26:33.329738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.539 [2024-11-19 21:26:33.329799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:59.798 [2024-11-19 21:26:33.336667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.798 [2024-11-19 21:26:33.336920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.798 [2024-11-19 21:26:33.336964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:59.798 [2024-11-19 21:26:33.343752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.798 [2024-11-19 21:26:33.343992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.798 [2024-11-19 21:26:33.344036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:59.798 [2024-11-19 21:26:33.350813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:59.798 [2024-11-19 21:26:33.351032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.798 [2024-11-19 21:26:33.351084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:59.798 
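The repeated data_crc32_calc_done / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pairs above and below are the error path this test case exercises: the NVMe/TCP data digest (DDGST) is a CRC32C over each PDU's data payload, the transport's digest callback recomputes it over the bytes actually received and logs the mismatch, and the affected WRITE is completed with a transient transport error that nvme_qpair.c then prints. The assertion a few lines further down ((( 275 > 0 ))) counts those same completions, aggregated per bdev by bdev_get_iostat. The sketch below illustrates the digest check itself in plain Python only; the helper names are invented for this note and are not SPDK APIs, and SPDK's own code computes the CRC32C over the PDU iovecs with optimized routines.

```python
# Illustrative only: how an NVMe/TCP data digest (DDGST) check works in principle.
# DDGST is a CRC-32C (Castagnoli) over the PDU's data payload.

def crc32c(data: bytes) -> int:
    """Bitwise reflected CRC-32C: poly 0x1EDC6F41 (reflected 0x82F63B78),
    init and final XOR 0xFFFFFFFF."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

def data_digest_ok(payload: bytes, received_ddgst: int) -> bool:
    # Recompute the digest over the payload that arrived and compare it with the
    # DDGST field carried in the PDU; a mismatch is what gets logged above as
    # "Data digest error" and surfaced as a transient transport error completion.
    return crc32c(payload) == received_ddgst

if __name__ == "__main__":
    payload = b"123456789"
    assert crc32c(payload) == 0xE3069283              # standard CRC-32C check value
    print(data_digest_ok(payload, crc32c(payload)))   # True: digest matches
    print(data_digest_ok(payload, 0xDEADBEEF))        # False: "data digest error"
```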
[2024-11-19 21:26:33.357830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:36:59.798 [2024-11-19 21:26:33.357985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:59.798 [2024-11-19 21:26:33.358035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:59.798 4252.50 IOPS, 531.56 MiB/s
00:36:59.798 Latency(us)
00:36:59.798 [2024-11-19T20:26:33.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:59.798 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:36:59.798 nvme0n1 : 2.00 4250.91 531.36 0.00 0.00 3752.86 2621.44 9126.49
00:36:59.798 [2024-11-19T20:26:33.593Z] ===================================================================================================================
00:36:59.798 [2024-11-19T20:26:33.593Z] Total : 4250.91 531.36 0.00 0.00 3752.86 2621.44 9126.49
00:36:59.798 {
00:36:59.798 "results": [
00:36:59.798 {
00:36:59.798 "job": "nvme0n1",
00:36:59.798 "core_mask": "0x2",
00:36:59.798 "workload": "randwrite",
00:36:59.798 "status": "finished",
00:36:59.798 "queue_depth": 16,
00:36:59.798 "io_size": 131072,
00:36:59.798 "runtime": 2.004512,
00:36:59.798 "iops": 4250.909947159209,
00:36:59.798 "mibps": 531.3637433949011,
00:36:59.798 "io_failed": 0,
00:36:59.798 "io_timeout": 0,
00:36:59.798 "avg_latency_us": 3752.859432426206,
00:36:59.798 "min_latency_us": 2621.44,
00:36:59.798 "max_latency_us": 9126.494814814814
00:36:59.798 }
00:36:59.798 ],
00:36:59.798 "core_count": 1
00:36:59.798 }
00:36:59.798 21:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:59.798 21:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:59.798 21:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:59.798 | .driver_specific
00:36:59.798 | .nvme_error
00:36:59.798 | .status_code
00:36:59.798 | .command_transient_transport_error'
00:36:59.798 21:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:00.056 21:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 275 > 0 ))
00:37:00.056 21:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3163298
00:37:00.056 21:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3163298 ']'
00:37:00.056 21:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3163298
00:37:00.056 21:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:37:00.056 21:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:00.056 21:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3163298
00:37:00.056 21:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:37:00.056 21:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:00.056 21:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3163298' 00:37:00.056 killing process with pid 3163298 00:37:00.056 21:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3163298 00:37:00.056 Received shutdown signal, test time was about 2.000000 seconds 00:37:00.056 00:37:00.056 Latency(us) 00:37:00.056 [2024-11-19T20:26:33.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:00.056 [2024-11-19T20:26:33.851Z] =================================================================================================================== 00:37:00.056 [2024-11-19T20:26:33.851Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:00.056 21:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3163298 00:37:00.991 21:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3161277 00:37:00.991 21:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3161277 ']' 00:37:00.991 21:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3161277 00:37:00.991 21:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:00.991 21:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:00.991 21:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3161277 00:37:00.991 21:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:00.991 21:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:00.991 21:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3161277' 00:37:00.991 killing process with pid 3161277 00:37:00.991 21:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3161277 00:37:00.991 21:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3161277 00:37:01.926 00:37:01.926 real 0m23.200s 00:37:01.926 user 0m45.462s 00:37:01.926 sys 0m4.693s 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:01.926 ************************************ 00:37:01.926 END TEST nvmf_digest_error 00:37:01.926 ************************************ 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in 
{1..20} 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:01.926 rmmod nvme_tcp 00:37:01.926 rmmod nvme_fabrics 00:37:01.926 rmmod nvme_keyring 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3161277 ']' 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3161277 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3161277 ']' 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3161277 00:37:01.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3161277) - No such process 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3161277 is not found' 00:37:01.926 Process with pid 3161277 is not found 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:01.926 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:01.927 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:37:01.927 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:37:01.927 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:01.927 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:37:01.927 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:01.927 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:01.927 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:01.927 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:01.927 21:26:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:04.466 00:37:04.466 real 0m52.291s 00:37:04.466 user 1m34.589s 00:37:04.466 sys 0m10.999s 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:04.466 ************************************ 00:37:04.466 END TEST nvmf_digest 00:37:04.466 ************************************ 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 
']' 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.466 ************************************ 00:37:04.466 START TEST nvmf_bdevperf 00:37:04.466 ************************************ 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:04.466 * Looking for test storage... 00:37:04.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:37:04.466 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:04.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.467 --rc genhtml_branch_coverage=1 00:37:04.467 --rc genhtml_function_coverage=1 00:37:04.467 --rc genhtml_legend=1 00:37:04.467 --rc geninfo_all_blocks=1 00:37:04.467 --rc geninfo_unexecuted_blocks=1 00:37:04.467 00:37:04.467 ' 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:04.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.467 --rc genhtml_branch_coverage=1 00:37:04.467 --rc genhtml_function_coverage=1 00:37:04.467 --rc genhtml_legend=1 00:37:04.467 --rc geninfo_all_blocks=1 00:37:04.467 --rc geninfo_unexecuted_blocks=1 00:37:04.467 00:37:04.467 ' 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:04.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.467 --rc genhtml_branch_coverage=1 00:37:04.467 --rc genhtml_function_coverage=1 00:37:04.467 --rc genhtml_legend=1 00:37:04.467 --rc geninfo_all_blocks=1 00:37:04.467 --rc geninfo_unexecuted_blocks=1 00:37:04.467 00:37:04.467 ' 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:04.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.467 --rc genhtml_branch_coverage=1 00:37:04.467 --rc genhtml_function_coverage=1 00:37:04.467 --rc genhtml_legend=1 00:37:04.467 --rc geninfo_all_blocks=1 00:37:04.467 --rc geninfo_unexecuted_blocks=1 00:37:04.467 00:37:04.467 ' 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:04.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:37:04.467 21:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:06.373 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:06.374 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:06.374 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:06.374 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:06.374 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:06.374 21:26:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:06.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:06.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:37:06.374 00:37:06.374 --- 10.0.0.2 ping statistics --- 00:37:06.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:06.374 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:06.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:06.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:37:06.374 00:37:06.374 --- 10.0.0.1 ping statistics --- 00:37:06.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:06.374 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3165943 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3165943 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3165943 ']' 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:06.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:06.374 21:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:06.633 [2024-11-19 21:26:40.178021] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:37:06.633 [2024-11-19 21:26:40.178172] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:06.633 [2024-11-19 21:26:40.333561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:06.891 [2024-11-19 21:26:40.479034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:06.891 [2024-11-19 21:26:40.479129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:06.891 [2024-11-19 21:26:40.479156] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:06.891 [2024-11-19 21:26:40.479180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:06.891 [2024-11-19 21:26:40.479200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:06.891 [2024-11-19 21:26:40.481933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:06.891 [2024-11-19 21:26:40.481985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:06.891 [2024-11-19 21:26:40.481992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:07.457 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:07.457 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:07.457 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:07.457 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:07.457 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:07.457 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:07.457 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:07.457 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.457 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:07.457 [2024-11-19 21:26:41.196481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:07.457 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.457 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:07.457 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.457 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:07.715 Malloc0 00:37:07.715 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.715 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:07.715 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.715 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:07.715 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:37:07.715 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:07.715 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.715 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:07.715 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.715 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:07.715 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.715 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:07.715 [2024-11-19 21:26:41.312548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:07.715 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.715 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:07.715 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:07.716 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:07.716 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:07.716 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:07.716 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:07.716 { 00:37:07.716 "params": { 00:37:07.716 "name": "Nvme$subsystem", 00:37:07.716 "trtype": "$TEST_TRANSPORT", 00:37:07.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:07.716 "adrfam": "ipv4", 00:37:07.716 "trsvcid": "$NVMF_PORT", 00:37:07.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:07.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:07.716 "hdgst": ${hdgst:-false}, 00:37:07.716 "ddgst": ${ddgst:-false} 00:37:07.716 }, 00:37:07.716 "method": "bdev_nvme_attach_controller" 00:37:07.716 } 00:37:07.716 EOF 00:37:07.716 )") 00:37:07.716 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:07.716 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:37:07.716 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:07.716 21:26:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:07.716 "params": { 00:37:07.716 "name": "Nvme1", 00:37:07.716 "trtype": "tcp", 00:37:07.716 "traddr": "10.0.0.2", 00:37:07.716 "adrfam": "ipv4", 00:37:07.716 "trsvcid": "4420", 00:37:07.716 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:07.716 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:07.716 "hdgst": false, 00:37:07.716 "ddgst": false 00:37:07.716 }, 00:37:07.716 "method": "bdev_nvme_attach_controller" 00:37:07.716 }' 00:37:07.716 [2024-11-19 21:26:41.398227] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:37:07.716 [2024-11-19 21:26:41.398366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3166196 ] 00:37:07.974 [2024-11-19 21:26:41.539309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.974 [2024-11-19 21:26:41.664928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:08.539 Running I/O for 1 seconds... 00:37:09.472 5970.00 IOPS, 23.32 MiB/s 00:37:09.472 Latency(us) 00:37:09.472 [2024-11-19T20:26:43.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:09.472 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:09.472 Verification LBA range: start 0x0 length 0x4000 00:37:09.472 Nvme1n1 : 1.01 6056.10 23.66 0.00 0.00 21010.88 3398.16 18835.53 00:37:09.472 [2024-11-19T20:26:43.267Z] =================================================================================================================== 00:37:09.472 [2024-11-19T20:26:43.267Z] Total : 6056.10 23.66 0.00 0.00 21010.88 3398.16 18835.53 00:37:10.407 21:26:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3166473 00:37:10.407 21:26:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:10.407 21:26:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:10.407 21:26:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:10.407 21:26:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:10.407 21:26:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:10.407 21:26:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:10.407 21:26:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:10.407 { 00:37:10.407 "params": { 00:37:10.407 "name": "Nvme$subsystem", 00:37:10.407 "trtype": "$TEST_TRANSPORT", 00:37:10.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:10.407 "adrfam": "ipv4", 00:37:10.407 "trsvcid": "$NVMF_PORT", 00:37:10.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:10.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:10.407 "hdgst": ${hdgst:-false}, 00:37:10.407 "ddgst": ${ddgst:-false} 00:37:10.407 }, 00:37:10.407 "method": "bdev_nvme_attach_controller" 00:37:10.407 } 00:37:10.407 EOF 00:37:10.407 )") 00:37:10.407 21:26:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:10.407 21:26:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:37:10.407 21:26:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:10.407 21:26:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:10.407 "params": { 00:37:10.407 "name": "Nvme1", 00:37:10.407 "trtype": "tcp", 00:37:10.407 "traddr": "10.0.0.2", 00:37:10.407 "adrfam": "ipv4", 00:37:10.407 "trsvcid": "4420", 00:37:10.407 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:10.407 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:10.407 "hdgst": false, 00:37:10.407 "ddgst": false 00:37:10.407 }, 00:37:10.407 "method": "bdev_nvme_attach_controller" 00:37:10.407 }' 00:37:10.407 [2024-11-19 21:26:44.101549] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:37:10.407 [2024-11-19 21:26:44.101690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3166473 ] 00:37:10.665 [2024-11-19 21:26:44.237929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:10.666 [2024-11-19 21:26:44.363109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.231 Running I/O for 15 seconds... 00:37:13.543 6158.00 IOPS, 24.05 MiB/s [2024-11-19T20:26:47.338Z] 6124.50 IOPS, 23.92 MiB/s [2024-11-19T20:26:47.338Z] 21:26:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3165943 00:37:13.543 21:26:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:13.543 [2024-11-19 21:26:47.052742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.543 [2024-11-19 21:26:47.052815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.543 [2024-11-19 21:26:47.052864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:102040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.543 [2024-11-19 21:26:47.052893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.543 [2024-11-19 21:26:47.052925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.543 [2024-11-19 21:26:47.052953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.543 [2024-11-19 21:26:47.052984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.543 [2024-11-19 21:26:47.053012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.543 [2024-11-19 21:26:47.053041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.543 [2024-11-19 21:26:47.053066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.543 [2024-11-19 21:26:47.053121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.543 [2024-11-19 
21:26:47.053146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.543 [2024-11-19 21:26:47.053175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.543 [2024-11-19 21:26:47.053201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.543 [2024-11-19 21:26:47.053230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.543 [2024-11-19 21:26:47.053258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.543 [2024-11-19 21:26:47.053289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:102448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.053317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.053350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:102456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.053395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.053429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.053464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.053493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:102472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.053519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.053546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.053571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.053598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:102488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.053623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.053651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:102496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.053684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.053711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:102504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.053736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.053764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.053790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.053818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.053843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.053871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:102528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.053895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.053923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.053947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.053974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:102544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.053999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.054026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:102552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.054051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.054088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.054115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.054165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.054191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.054219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:102576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.054244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.054271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:102584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.054296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.054323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:102592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.054347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.054375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:102600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.054400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.054427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:102608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.054451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.054477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:102616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.054502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.054530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.054555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.054582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.054607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.054634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.054659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.054687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.054711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.054739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.054763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.054791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:102664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.054821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.054849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.054875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.054902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.544 [2024-11-19 21:26:47.054927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.054955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.544 [2024-11-19 21:26:47.054980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.055007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:102096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.544 [2024-11-19 21:26:47.055032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.055060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.544 [2024-11-19 21:26:47.055096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.055135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.544 [2024-11-19 21:26:47.055160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.055187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.544 [2024-11-19 21:26:47.055221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.055249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.544 [2024-11-19 21:26:47.055274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.544 [2024-11-19 21:26:47.055301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.544 [2024-11-19 21:26:47.055326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.055354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.545 [2024-11-19 21:26:47.055379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:37:13.545 [2024-11-19 21:26:47.055406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:102688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.055431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.055459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.055484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.055512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.055542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.055571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.055596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.055624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.055650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.055678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.055703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.055731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.055755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.055783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:102744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.055808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.055835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:102752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.055860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.055887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:102760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.055913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 
21:26:47.055940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.055965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.055993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.056018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.056045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:102784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.056078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.056109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.056139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.056166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.056191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.056228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:102808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.056254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.056281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:102816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.056306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.056334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:102824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.056360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.056388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.056413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.056441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.056467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.056495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:102848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.056520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.056548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:102856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.056573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.056601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.056626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.056653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.056678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.056705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.056730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.056759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.056784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.056813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:102896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.056838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.056865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.056909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.056940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.056965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.056993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.057018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.545 [2024-11-19 21:26:47.057045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:33 nsid:1 lba:102928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.545 [2024-11-19 21:26:47.057078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.057110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.546 [2024-11-19 21:26:47.057140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.057168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.546 [2024-11-19 21:26:47.057199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.057228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:102952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.546 [2024-11-19 21:26:47.057253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.057281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.546 [2024-11-19 21:26:47.057306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.057334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.546 [2024-11-19 21:26:47.057360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.057389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.546 [2024-11-19 21:26:47.057413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.057442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:102984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.546 [2024-11-19 21:26:47.057468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.057495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.546 [2024-11-19 21:26:47.057521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.057549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:103000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.546 [2024-11-19 21:26:47.057574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.057606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:103008 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.546 [2024-11-19 21:26:47.057633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.057661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.546 [2024-11-19 21:26:47.057686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.057714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:103024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.546 [2024-11-19 21:26:47.057739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.057767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.546 [2024-11-19 21:26:47.057792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.057820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:103040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.546 [2024-11-19 21:26:47.057846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.057874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:103048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.546 [2024-11-19 21:26:47.057899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.057926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.546 [2024-11-19 21:26:47.057951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.057980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.546 [2024-11-19 21:26:47.058004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.058031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.546 [2024-11-19 21:26:47.058056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.058095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.546 [2024-11-19 21:26:47.058132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.058160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:13.546 [2024-11-19 21:26:47.058191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.058220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.546 [2024-11-19 21:26:47.058247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.058275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.546 [2024-11-19 21:26:47.058305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.058334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:102208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.546 [2024-11-19 21:26:47.058359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.058387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.546 [2024-11-19 21:26:47.058412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.058440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.546 [2024-11-19 21:26:47.058465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.058494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.546 [2024-11-19 21:26:47.058519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.058547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.546 [2024-11-19 21:26:47.058572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.058599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.546 [2024-11-19 21:26:47.058624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.058652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.546 [2024-11-19 21:26:47.058677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.058706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.546 [2024-11-19 
21:26:47.058731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.058759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.546 [2024-11-19 21:26:47.058784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.058812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.546 [2024-11-19 21:26:47.058837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.058865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.546 [2024-11-19 21:26:47.058890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.546 [2024-11-19 21:26:47.058918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.546 [2024-11-19 21:26:47.058943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.058970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.547 [2024-11-19 21:26:47.059002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.059033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.547 [2024-11-19 21:26:47.059057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.059096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.547 [2024-11-19 21:26:47.059131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.059159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.547 [2024-11-19 21:26:47.059184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.059218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.547 [2024-11-19 21:26:47.059243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.059272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.547 [2024-11-19 21:26:47.059296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.059324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.547 [2024-11-19 21:26:47.059349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.059379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.547 [2024-11-19 21:26:47.059404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.059431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.547 [2024-11-19 21:26:47.059456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.059484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.547 [2024-11-19 21:26:47.059509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.059538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:102384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.547 [2024-11-19 21:26:47.059563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.059590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.547 [2024-11-19 21:26:47.059615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.059643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.547 [2024-11-19 21:26:47.059667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.059699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.547 [2024-11-19 21:26:47.059725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.059751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.547 [2024-11-19 21:26:47.059776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.059804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:13.547 [2024-11-19 21:26:47.059829] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.059853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:37:13.547 [2024-11-19 21:26:47.059892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:13.547 [2024-11-19 21:26:47.059913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:13.547 [2024-11-19 21:26:47.059936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102432 len:8 PRP1 0x0 PRP2 0x0 00:37:13.547 [2024-11-19 21:26:47.059960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.060367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:13.547 [2024-11-19 21:26:47.060401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.060429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:13.547 [2024-11-19 21:26:47.060452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.060478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:13.547 [2024-11-19 21:26:47.060502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.060526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:13.547 [2024-11-19 21:26:47.060549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.547 [2024-11-19 21:26:47.060571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.547 [2024-11-19 21:26:47.064691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.547 [2024-11-19 21:26:47.064753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.547 [2024-11-19 21:26:47.065598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.547 [2024-11-19 21:26:47.065656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.547 [2024-11-19 21:26:47.065682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.547 [2024-11-19 21:26:47.065988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.547 [2024-11-19 21:26:47.066299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.547 [2024-11-19 21:26:47.066348] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.547 [2024-11-19 21:26:47.066376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.547 [2024-11-19 21:26:47.066402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:13.547 [2024-11-19 21:26:47.079410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.547 [2024-11-19 21:26:47.079895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.547 [2024-11-19 21:26:47.079937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.547 [2024-11-19 21:26:47.079963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.547 [2024-11-19 21:26:47.080269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.547 [2024-11-19 21:26:47.080553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.547 [2024-11-19 21:26:47.080585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.547 [2024-11-19 21:26:47.080608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.547 [2024-11-19 21:26:47.080630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:13.547 [2024-11-19 21:26:47.093765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.547 [2024-11-19 21:26:47.094241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.547 [2024-11-19 21:26:47.094282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.547 [2024-11-19 21:26:47.094308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.547 [2024-11-19 21:26:47.094589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.547 [2024-11-19 21:26:47.094872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.548 [2024-11-19 21:26:47.094904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.548 [2024-11-19 21:26:47.094927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.548 [2024-11-19 21:26:47.094948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
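[editor's note] Every reset attempt in this stretch fails at the socket layer before any NVMe/TCP traffic is exchanged: posix_sock_create reports "connect() failed, errno = 111", which on Linux is ECONNREFUSED, meaning nothing is currently accepting connections at 10.0.0.2:4420 (the NVMe/TCP port printed in the log). The standalone sketch below is illustrative only and is not taken from SPDK; it reproduces the same error classification with plain POSIX sockets, using the address and port shown in the log as assumed values.

/* Minimal sketch (not SPDK code): try a TCP connect to 10.0.0.2:4420 and
 * report errno the way the log line does. With the host up but nothing
 * listening on the port, connect() typically fails with errno 111
 * (ECONNREFUSED). */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Mirrors the log message format: "connect() failed, errno = 111" */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected\n");
    }

    close(fd);
    return 0;
}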
00:37:13.548 [2024-11-19 21:26:47.108364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.548 [2024-11-19 21:26:47.108804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.548 [2024-11-19 21:26:47.108847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.548 [2024-11-19 21:26:47.108873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.548 [2024-11-19 21:26:47.109169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.548 [2024-11-19 21:26:47.109454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.548 [2024-11-19 21:26:47.109485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.548 [2024-11-19 21:26:47.109513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.548 [2024-11-19 21:26:47.109536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:13.548 [2024-11-19 21:26:47.122914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.548 [2024-11-19 21:26:47.123371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.548 [2024-11-19 21:26:47.123413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.548 [2024-11-19 21:26:47.123439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.548 [2024-11-19 21:26:47.123721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.548 [2024-11-19 21:26:47.124002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.548 [2024-11-19 21:26:47.124032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.548 [2024-11-19 21:26:47.124054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.548 [2024-11-19 21:26:47.124087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.548 [2024-11-19 21:26:47.137457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.548 [2024-11-19 21:26:47.137927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.548 [2024-11-19 21:26:47.137969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.548 [2024-11-19 21:26:47.137995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.548 [2024-11-19 21:26:47.138297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.548 [2024-11-19 21:26:47.138579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.548 [2024-11-19 21:26:47.138610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.548 [2024-11-19 21:26:47.138632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.548 [2024-11-19 21:26:47.138654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:13.548 [2024-11-19 21:26:47.151951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.548 [2024-11-19 21:26:47.152385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.548 [2024-11-19 21:26:47.152426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.548 [2024-11-19 21:26:47.152451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.548 [2024-11-19 21:26:47.152731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.548 [2024-11-19 21:26:47.153032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.548 [2024-11-19 21:26:47.153064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.548 [2024-11-19 21:26:47.153100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.548 [2024-11-19 21:26:47.153123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.548 [2024-11-19 21:26:47.166434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.548 [2024-11-19 21:26:47.166894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.548 [2024-11-19 21:26:47.166935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.548 [2024-11-19 21:26:47.166962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.548 [2024-11-19 21:26:47.167253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.548 [2024-11-19 21:26:47.167535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.548 [2024-11-19 21:26:47.167566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.548 [2024-11-19 21:26:47.167589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.548 [2024-11-19 21:26:47.167610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:13.548 [2024-11-19 21:26:47.180879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.548 [2024-11-19 21:26:47.181332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.548 [2024-11-19 21:26:47.181373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.548 [2024-11-19 21:26:47.181400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.548 [2024-11-19 21:26:47.181680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.548 [2024-11-19 21:26:47.181963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.548 [2024-11-19 21:26:47.181993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.548 [2024-11-19 21:26:47.182015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.548 [2024-11-19 21:26:47.182036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.548 [2024-11-19 21:26:47.195328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.548 [2024-11-19 21:26:47.195829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.548 [2024-11-19 21:26:47.195888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.548 [2024-11-19 21:26:47.195915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.548 [2024-11-19 21:26:47.196207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.548 [2024-11-19 21:26:47.196489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.548 [2024-11-19 21:26:47.196520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.548 [2024-11-19 21:26:47.196542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.549 [2024-11-19 21:26:47.196563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:13.549 [2024-11-19 21:26:47.209845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.549 [2024-11-19 21:26:47.210328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.549 [2024-11-19 21:26:47.210375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.549 [2024-11-19 21:26:47.210402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.549 [2024-11-19 21:26:47.210682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.549 [2024-11-19 21:26:47.210962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.549 [2024-11-19 21:26:47.210993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.549 [2024-11-19 21:26:47.211015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.549 [2024-11-19 21:26:47.211037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.549 [2024-11-19 21:26:47.224306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.549 [2024-11-19 21:26:47.224805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.549 [2024-11-19 21:26:47.224865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.549 [2024-11-19 21:26:47.224892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.549 [2024-11-19 21:26:47.225191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.549 [2024-11-19 21:26:47.225473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.549 [2024-11-19 21:26:47.225504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.549 [2024-11-19 21:26:47.225527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.549 [2024-11-19 21:26:47.225548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:13.549 [2024-11-19 21:26:47.238827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.549 [2024-11-19 21:26:47.239298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.549 [2024-11-19 21:26:47.239340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.549 [2024-11-19 21:26:47.239367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.549 [2024-11-19 21:26:47.239647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.549 [2024-11-19 21:26:47.239928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.549 [2024-11-19 21:26:47.239959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.549 [2024-11-19 21:26:47.239981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.549 [2024-11-19 21:26:47.240003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.549 [2024-11-19 21:26:47.253309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.549 [2024-11-19 21:26:47.253786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.549 [2024-11-19 21:26:47.253827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.549 [2024-11-19 21:26:47.253854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.549 [2024-11-19 21:26:47.254153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.549 [2024-11-19 21:26:47.254434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.549 [2024-11-19 21:26:47.254465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.549 [2024-11-19 21:26:47.254487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.549 [2024-11-19 21:26:47.254509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:13.549 [2024-11-19 21:26:47.267758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.549 [2024-11-19 21:26:47.268251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.549 [2024-11-19 21:26:47.268308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.549 [2024-11-19 21:26:47.268334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.549 [2024-11-19 21:26:47.268613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.549 [2024-11-19 21:26:47.268893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.549 [2024-11-19 21:26:47.268924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.549 [2024-11-19 21:26:47.268946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.549 [2024-11-19 21:26:47.268984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.549 [2024-11-19 21:26:47.282232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.549 [2024-11-19 21:26:47.282752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.549 [2024-11-19 21:26:47.282812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.549 [2024-11-19 21:26:47.282838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.549 [2024-11-19 21:26:47.283132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.549 [2024-11-19 21:26:47.283413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.549 [2024-11-19 21:26:47.283444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.549 [2024-11-19 21:26:47.283466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.549 [2024-11-19 21:26:47.283487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:13.549 [2024-11-19 21:26:47.296798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.549 [2024-11-19 21:26:47.297287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.549 [2024-11-19 21:26:47.297347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.549 [2024-11-19 21:26:47.297373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.549 [2024-11-19 21:26:47.297653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.549 [2024-11-19 21:26:47.297933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.549 [2024-11-19 21:26:47.297970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.549 [2024-11-19 21:26:47.297994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.549 [2024-11-19 21:26:47.298016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.549 [2024-11-19 21:26:47.311290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.549 [2024-11-19 21:26:47.311725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.549 [2024-11-19 21:26:47.311766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.549 [2024-11-19 21:26:47.311792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.549 [2024-11-19 21:26:47.312083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.549 [2024-11-19 21:26:47.312364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.549 [2024-11-19 21:26:47.312395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.549 [2024-11-19 21:26:47.312417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.549 [2024-11-19 21:26:47.312439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:13.549 [2024-11-19 21:26:47.325723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.549 [2024-11-19 21:26:47.326179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.549 [2024-11-19 21:26:47.326220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.549 [2024-11-19 21:26:47.326246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.549 [2024-11-19 21:26:47.326525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.550 [2024-11-19 21:26:47.326805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.550 [2024-11-19 21:26:47.326835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.550 [2024-11-19 21:26:47.326857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.550 [2024-11-19 21:26:47.326878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.809 [2024-11-19 21:26:47.340140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.809 [2024-11-19 21:26:47.340614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.809 [2024-11-19 21:26:47.340673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.809 [2024-11-19 21:26:47.340699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.809 [2024-11-19 21:26:47.340978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.809 [2024-11-19 21:26:47.341273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.809 [2024-11-19 21:26:47.341305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.809 [2024-11-19 21:26:47.341326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.809 [2024-11-19 21:26:47.341354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:13.809 [2024-11-19 21:26:47.354690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.809 [2024-11-19 21:26:47.355182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.809 [2024-11-19 21:26:47.355224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.809 [2024-11-19 21:26:47.355251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.809 [2024-11-19 21:26:47.355530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.809 [2024-11-19 21:26:47.355813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.809 [2024-11-19 21:26:47.355843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.809 [2024-11-19 21:26:47.355865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.809 [2024-11-19 21:26:47.355887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.809 [2024-11-19 21:26:47.369192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.809 [2024-11-19 21:26:47.369631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.809 [2024-11-19 21:26:47.369672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.809 [2024-11-19 21:26:47.369699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.809 [2024-11-19 21:26:47.369979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.809 [2024-11-19 21:26:47.370272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.809 [2024-11-19 21:26:47.370304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.809 [2024-11-19 21:26:47.370327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.809 [2024-11-19 21:26:47.370348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:13.809 [2024-11-19 21:26:47.383598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.809 [2024-11-19 21:26:47.384044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.809 [2024-11-19 21:26:47.384094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.809 [2024-11-19 21:26:47.384122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.809 [2024-11-19 21:26:47.384401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.809 [2024-11-19 21:26:47.384681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.809 [2024-11-19 21:26:47.384712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.809 [2024-11-19 21:26:47.384734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.809 [2024-11-19 21:26:47.384756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.809 [2024-11-19 21:26:47.398009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.809 [2024-11-19 21:26:47.398496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.809 [2024-11-19 21:26:47.398537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.809 [2024-11-19 21:26:47.398563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.810 [2024-11-19 21:26:47.398841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.810 [2024-11-19 21:26:47.399137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.810 [2024-11-19 21:26:47.399169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.810 [2024-11-19 21:26:47.399191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.810 [2024-11-19 21:26:47.399214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:13.810 [2024-11-19 21:26:47.412463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.810 [2024-11-19 21:26:47.412910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.810 [2024-11-19 21:26:47.412951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.810 [2024-11-19 21:26:47.412977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.810 [2024-11-19 21:26:47.413270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.810 [2024-11-19 21:26:47.413551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.810 [2024-11-19 21:26:47.413582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.810 [2024-11-19 21:26:47.413603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.810 [2024-11-19 21:26:47.413624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.810 [2024-11-19 21:26:47.426871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.810 [2024-11-19 21:26:47.427346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.810 [2024-11-19 21:26:47.427387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.810 [2024-11-19 21:26:47.427413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.810 [2024-11-19 21:26:47.427691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.810 [2024-11-19 21:26:47.427972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.810 [2024-11-19 21:26:47.428003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.810 [2024-11-19 21:26:47.428025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.810 [2024-11-19 21:26:47.428046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:13.810 [2024-11-19 21:26:47.441326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.810 [2024-11-19 21:26:47.441764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.810 [2024-11-19 21:26:47.441805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.810 [2024-11-19 21:26:47.441837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.810 [2024-11-19 21:26:47.442131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.810 [2024-11-19 21:26:47.442412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.810 [2024-11-19 21:26:47.442443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.810 [2024-11-19 21:26:47.442465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.810 [2024-11-19 21:26:47.442487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.810 [2024-11-19 21:26:47.455769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.810 [2024-11-19 21:26:47.456229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.810 [2024-11-19 21:26:47.456271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.810 [2024-11-19 21:26:47.456296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.810 [2024-11-19 21:26:47.456574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.810 [2024-11-19 21:26:47.456854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.810 [2024-11-19 21:26:47.456885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.810 [2024-11-19 21:26:47.456907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.810 [2024-11-19 21:26:47.456928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:13.810 [2024-11-19 21:26:47.470184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.810 [2024-11-19 21:26:47.470650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.810 [2024-11-19 21:26:47.470692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.810 [2024-11-19 21:26:47.470718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.810 [2024-11-19 21:26:47.470997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.810 [2024-11-19 21:26:47.471292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.810 [2024-11-19 21:26:47.471323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.810 [2024-11-19 21:26:47.471345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.810 [2024-11-19 21:26:47.471366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.810 [2024-11-19 21:26:47.484626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.810 [2024-11-19 21:26:47.485161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.810 [2024-11-19 21:26:47.485215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.810 [2024-11-19 21:26:47.485241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.810 [2024-11-19 21:26:47.485521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.810 [2024-11-19 21:26:47.485808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.810 [2024-11-19 21:26:47.485840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.810 [2024-11-19 21:26:47.485862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.810 [2024-11-19 21:26:47.485884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:13.810 [2024-11-19 21:26:47.499170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.810 [2024-11-19 21:26:47.499616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.810 [2024-11-19 21:26:47.499657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.810 [2024-11-19 21:26:47.499683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.810 [2024-11-19 21:26:47.499961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.810 [2024-11-19 21:26:47.500256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.810 [2024-11-19 21:26:47.500288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.810 [2024-11-19 21:26:47.500310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.810 [2024-11-19 21:26:47.500332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.810 [2024-11-19 21:26:47.513574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.810 [2024-11-19 21:26:47.514121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.810 [2024-11-19 21:26:47.514164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.810 [2024-11-19 21:26:47.514190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.810 [2024-11-19 21:26:47.514468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.810 [2024-11-19 21:26:47.514750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.811 [2024-11-19 21:26:47.514780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.811 [2024-11-19 21:26:47.514802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.811 [2024-11-19 21:26:47.514824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:13.811 [2024-11-19 21:26:47.528074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.811 [2024-11-19 21:26:47.528565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.811 [2024-11-19 21:26:47.528625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.811 [2024-11-19 21:26:47.528651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.811 [2024-11-19 21:26:47.528930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.811 [2024-11-19 21:26:47.529224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.811 [2024-11-19 21:26:47.529261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.811 [2024-11-19 21:26:47.529285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.811 [2024-11-19 21:26:47.529307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.811 [2024-11-19 21:26:47.542572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.811 [2024-11-19 21:26:47.543115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.811 [2024-11-19 21:26:47.543156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.811 [2024-11-19 21:26:47.543181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.811 [2024-11-19 21:26:47.543460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.811 [2024-11-19 21:26:47.543741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.811 [2024-11-19 21:26:47.543772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.811 [2024-11-19 21:26:47.543794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.811 [2024-11-19 21:26:47.543815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:13.811 [2024-11-19 21:26:47.557127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.811 [2024-11-19 21:26:47.557626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.811 [2024-11-19 21:26:47.557668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.811 [2024-11-19 21:26:47.557694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.811 [2024-11-19 21:26:47.557973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.811 [2024-11-19 21:26:47.558268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.811 [2024-11-19 21:26:47.558300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.811 [2024-11-19 21:26:47.558322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.811 [2024-11-19 21:26:47.558343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.811 [2024-11-19 21:26:47.571583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.811 [2024-11-19 21:26:47.572012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.811 [2024-11-19 21:26:47.572088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.811 [2024-11-19 21:26:47.572115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.811 [2024-11-19 21:26:47.572394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.811 [2024-11-19 21:26:47.572675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.811 [2024-11-19 21:26:47.572705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.811 [2024-11-19 21:26:47.572729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.811 [2024-11-19 21:26:47.572759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:13.811 [2024-11-19 21:26:47.586019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.811 [2024-11-19 21:26:47.586487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.811 [2024-11-19 21:26:47.586530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.811 [2024-11-19 21:26:47.586556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.811 [2024-11-19 21:26:47.586835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.811 [2024-11-19 21:26:47.587128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.811 [2024-11-19 21:26:47.587159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.811 [2024-11-19 21:26:47.587181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.811 [2024-11-19 21:26:47.587204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.811 [2024-11-19 21:26:47.600462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.811 [2024-11-19 21:26:47.600907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.811 [2024-11-19 21:26:47.600948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.811 [2024-11-19 21:26:47.600974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.811 [2024-11-19 21:26:47.601266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.811 [2024-11-19 21:26:47.601547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.811 [2024-11-19 21:26:47.601578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.811 [2024-11-19 21:26:47.601599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.811 [2024-11-19 21:26:47.601621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.071 [2024-11-19 21:26:47.614877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.071 [2024-11-19 21:26:47.615347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.071 [2024-11-19 21:26:47.615389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.071 [2024-11-19 21:26:47.615415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.071 [2024-11-19 21:26:47.615693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.071 [2024-11-19 21:26:47.615974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.071 [2024-11-19 21:26:47.616005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.071 [2024-11-19 21:26:47.616026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.071 [2024-11-19 21:26:47.616047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.071 [2024-11-19 21:26:47.629307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.071 [2024-11-19 21:26:47.629776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.071 [2024-11-19 21:26:47.629817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.071 [2024-11-19 21:26:47.629843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.071 [2024-11-19 21:26:47.630137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.071 [2024-11-19 21:26:47.630418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.071 [2024-11-19 21:26:47.630449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.071 [2024-11-19 21:26:47.630471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.071 [2024-11-19 21:26:47.630493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.071 [2024-11-19 21:26:47.643748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.071 [2024-11-19 21:26:47.644215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.071 [2024-11-19 21:26:47.644256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.071 [2024-11-19 21:26:47.644282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.071 [2024-11-19 21:26:47.644560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.071 [2024-11-19 21:26:47.644840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.071 [2024-11-19 21:26:47.644871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.071 [2024-11-19 21:26:47.644894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.071 [2024-11-19 21:26:47.644916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.071 [2024-11-19 21:26:47.658220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.071 [2024-11-19 21:26:47.658653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.071 [2024-11-19 21:26:47.658695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.071 [2024-11-19 21:26:47.658721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.071 [2024-11-19 21:26:47.659002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.071 [2024-11-19 21:26:47.659296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.071 [2024-11-19 21:26:47.659332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.071 [2024-11-19 21:26:47.659357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.071 [2024-11-19 21:26:47.659381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.071 [2024-11-19 21:26:47.672706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.071 [2024-11-19 21:26:47.673166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.071 [2024-11-19 21:26:47.673208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.071 [2024-11-19 21:26:47.673240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.071 [2024-11-19 21:26:47.673523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.071 [2024-11-19 21:26:47.673805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.071 [2024-11-19 21:26:47.673838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.071 [2024-11-19 21:26:47.673860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.071 [2024-11-19 21:26:47.673881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.071 [2024-11-19 21:26:47.687040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.071 [2024-11-19 21:26:47.687476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.071 [2024-11-19 21:26:47.687518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.071 [2024-11-19 21:26:47.687559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.071 [2024-11-19 21:26:47.687841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.071 [2024-11-19 21:26:47.688139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.071 [2024-11-19 21:26:47.688171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.071 [2024-11-19 21:26:47.688194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.071 [2024-11-19 21:26:47.688215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.071 [2024-11-19 21:26:47.701536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.071 [2024-11-19 21:26:47.701975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.071 [2024-11-19 21:26:47.702016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.071 [2024-11-19 21:26:47.702042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.071 [2024-11-19 21:26:47.702331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.071 [2024-11-19 21:26:47.702614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.071 [2024-11-19 21:26:47.702645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.071 [2024-11-19 21:26:47.702667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.071 [2024-11-19 21:26:47.702689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.071 [2024-11-19 21:26:47.715999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.071 [2024-11-19 21:26:47.716461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.071 [2024-11-19 21:26:47.716501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.071 [2024-11-19 21:26:47.716527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.071 [2024-11-19 21:26:47.716809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.071 [2024-11-19 21:26:47.717110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.071 [2024-11-19 21:26:47.717141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.071 [2024-11-19 21:26:47.717164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.071 [2024-11-19 21:26:47.717186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.071 [2024-11-19 21:26:47.730568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.071 [2024-11-19 21:26:47.730995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.071 [2024-11-19 21:26:47.731036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.071 [2024-11-19 21:26:47.731062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.071 [2024-11-19 21:26:47.731362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.071 [2024-11-19 21:26:47.731645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.071 [2024-11-19 21:26:47.731676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.071 [2024-11-19 21:26:47.731699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.071 [2024-11-19 21:26:47.731720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.071 [2024-11-19 21:26:47.745054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.071 [2024-11-19 21:26:47.745525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.072 [2024-11-19 21:26:47.745566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.072 [2024-11-19 21:26:47.745592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.072 [2024-11-19 21:26:47.745873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.072 [2024-11-19 21:26:47.746170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.072 [2024-11-19 21:26:47.746202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.072 [2024-11-19 21:26:47.746224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.072 [2024-11-19 21:26:47.746246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.072 [2024-11-19 21:26:47.759572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.072 [2024-11-19 21:26:47.760022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.072 [2024-11-19 21:26:47.760063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.072 [2024-11-19 21:26:47.760103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.072 [2024-11-19 21:26:47.760383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.072 [2024-11-19 21:26:47.760665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.072 [2024-11-19 21:26:47.760696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.072 [2024-11-19 21:26:47.760725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.072 [2024-11-19 21:26:47.760748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.072 [2024-11-19 21:26:47.774054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.072 [2024-11-19 21:26:47.774528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.072 [2024-11-19 21:26:47.774570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.072 [2024-11-19 21:26:47.774596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.072 [2024-11-19 21:26:47.774877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.072 [2024-11-19 21:26:47.775178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.072 [2024-11-19 21:26:47.775209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.072 [2024-11-19 21:26:47.775232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.072 [2024-11-19 21:26:47.775254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.072 [2024-11-19 21:26:47.788666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.072 [2024-11-19 21:26:47.789117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.072 [2024-11-19 21:26:47.789160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.072 [2024-11-19 21:26:47.789187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.072 [2024-11-19 21:26:47.789471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.072 [2024-11-19 21:26:47.789755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.072 [2024-11-19 21:26:47.789786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.072 [2024-11-19 21:26:47.789809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.072 [2024-11-19 21:26:47.789832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.072 [2024-11-19 21:26:47.803234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.072 [2024-11-19 21:26:47.803687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.072 [2024-11-19 21:26:47.803728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.072 [2024-11-19 21:26:47.803754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.072 [2024-11-19 21:26:47.804037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.072 [2024-11-19 21:26:47.804335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.072 [2024-11-19 21:26:47.804366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.072 [2024-11-19 21:26:47.804389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.072 [2024-11-19 21:26:47.804411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.072 [2024-11-19 21:26:47.817634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.072 [2024-11-19 21:26:47.818105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.072 [2024-11-19 21:26:47.818147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.072 [2024-11-19 21:26:47.818173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.072 [2024-11-19 21:26:47.818455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.072 [2024-11-19 21:26:47.818739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.072 [2024-11-19 21:26:47.818769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.072 [2024-11-19 21:26:47.818791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.072 [2024-11-19 21:26:47.818813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.072 [2024-11-19 21:26:47.832140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.072 [2024-11-19 21:26:47.832583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.072 [2024-11-19 21:26:47.832625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.072 [2024-11-19 21:26:47.832652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.072 [2024-11-19 21:26:47.832933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.072 [2024-11-19 21:26:47.833238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.072 [2024-11-19 21:26:47.833271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.072 [2024-11-19 21:26:47.833295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.072 [2024-11-19 21:26:47.833317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.072 [2024-11-19 21:26:47.846668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.072 [2024-11-19 21:26:47.847104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.072 [2024-11-19 21:26:47.847146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.072 [2024-11-19 21:26:47.847172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.072 [2024-11-19 21:26:47.847453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.072 [2024-11-19 21:26:47.847736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.072 [2024-11-19 21:26:47.847767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.072 [2024-11-19 21:26:47.847790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.072 [2024-11-19 21:26:47.847812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.072 [2024-11-19 21:26:47.861233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.072 [2024-11-19 21:26:47.861698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.072 [2024-11-19 21:26:47.861744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.072 [2024-11-19 21:26:47.861771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.072 [2024-11-19 21:26:47.862052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.072 [2024-11-19 21:26:47.862347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.072 [2024-11-19 21:26:47.862378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.072 [2024-11-19 21:26:47.862401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.072 [2024-11-19 21:26:47.862423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.332 [2024-11-19 21:26:47.875809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.332 [2024-11-19 21:26:47.876259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.332 [2024-11-19 21:26:47.876300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.332 [2024-11-19 21:26:47.876326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.332 [2024-11-19 21:26:47.876608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.332 [2024-11-19 21:26:47.876891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.332 [2024-11-19 21:26:47.876922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.332 [2024-11-19 21:26:47.876945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.332 [2024-11-19 21:26:47.876966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.332 [2024-11-19 21:26:47.890295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.332 [2024-11-19 21:26:47.890766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.332 [2024-11-19 21:26:47.890808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.332 [2024-11-19 21:26:47.890834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.332 [2024-11-19 21:26:47.891125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.332 [2024-11-19 21:26:47.891422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.332 [2024-11-19 21:26:47.891453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.332 [2024-11-19 21:26:47.891475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.332 [2024-11-19 21:26:47.891497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.332 [2024-11-19 21:26:47.904697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.332 [2024-11-19 21:26:47.905166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.332 [2024-11-19 21:26:47.905214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.332 [2024-11-19 21:26:47.905240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.332 [2024-11-19 21:26:47.905536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.332 [2024-11-19 21:26:47.905821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.332 [2024-11-19 21:26:47.905852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.332 [2024-11-19 21:26:47.905874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.332 [2024-11-19 21:26:47.905895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.332 [2024-11-19 21:26:47.919146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.332 [2024-11-19 21:26:47.919570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.332 [2024-11-19 21:26:47.919610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.332 [2024-11-19 21:26:47.919636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.332 [2024-11-19 21:26:47.919916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.332 [2024-11-19 21:26:47.920218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.332 [2024-11-19 21:26:47.920251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.332 [2024-11-19 21:26:47.920274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.332 [2024-11-19 21:26:47.920295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.332 [2024-11-19 21:26:47.933721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.332 [2024-11-19 21:26:47.934175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.332 [2024-11-19 21:26:47.934220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.332 [2024-11-19 21:26:47.934246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.332 [2024-11-19 21:26:47.934529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.332 [2024-11-19 21:26:47.934811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.332 [2024-11-19 21:26:47.934842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.332 [2024-11-19 21:26:47.934864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.332 [2024-11-19 21:26:47.934886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.332 [2024-11-19 21:26:47.948262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.332 [2024-11-19 21:26:47.948721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.332 [2024-11-19 21:26:47.948761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.332 [2024-11-19 21:26:47.948789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.332 [2024-11-19 21:26:47.949079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.332 [2024-11-19 21:26:47.949364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.332 [2024-11-19 21:26:47.949401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.332 [2024-11-19 21:26:47.949425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.332 [2024-11-19 21:26:47.949447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.332 4251.33 IOPS, 16.61 MiB/s [2024-11-19T20:26:48.127Z] [2024-11-19 21:26:47.962988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.332 [2024-11-19 21:26:47.963439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.332 [2024-11-19 21:26:47.963482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.333 [2024-11-19 21:26:47.963509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.333 [2024-11-19 21:26:47.963790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.333 [2024-11-19 21:26:47.964084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.333 [2024-11-19 21:26:47.964115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.333 [2024-11-19 21:26:47.964138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.333 [2024-11-19 21:26:47.964160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.333 [2024-11-19 21:26:47.977530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.333 [2024-11-19 21:26:47.978009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.333 [2024-11-19 21:26:47.978051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.333 [2024-11-19 21:26:47.978087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.333 [2024-11-19 21:26:47.978373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.333 [2024-11-19 21:26:47.978657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.333 [2024-11-19 21:26:47.978688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.333 [2024-11-19 21:26:47.978710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.333 [2024-11-19 21:26:47.978732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.333 [2024-11-19 21:26:47.992111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.333 [2024-11-19 21:26:47.992553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.333 [2024-11-19 21:26:47.992594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.333 [2024-11-19 21:26:47.992620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.333 [2024-11-19 21:26:47.992901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.333 [2024-11-19 21:26:47.993198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.333 [2024-11-19 21:26:47.993230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.333 [2024-11-19 21:26:47.993259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.333 [2024-11-19 21:26:47.993283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.333 [2024-11-19 21:26:48.006653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.333 [2024-11-19 21:26:48.007098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.333 [2024-11-19 21:26:48.007139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.333 [2024-11-19 21:26:48.007166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.333 [2024-11-19 21:26:48.007449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.333 [2024-11-19 21:26:48.007732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.333 [2024-11-19 21:26:48.007762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.333 [2024-11-19 21:26:48.007785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.333 [2024-11-19 21:26:48.007807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.333 [2024-11-19 21:26:48.021159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.333 [2024-11-19 21:26:48.021595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.333 [2024-11-19 21:26:48.021637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.333 [2024-11-19 21:26:48.021664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.333 [2024-11-19 21:26:48.021945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.333 [2024-11-19 21:26:48.022242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.333 [2024-11-19 21:26:48.022273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.333 [2024-11-19 21:26:48.022296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.333 [2024-11-19 21:26:48.022318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.333 [2024-11-19 21:26:48.035530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.333 [2024-11-19 21:26:48.035988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.333 [2024-11-19 21:26:48.036030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.333 [2024-11-19 21:26:48.036056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.333 [2024-11-19 21:26:48.036349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.333 [2024-11-19 21:26:48.036633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.333 [2024-11-19 21:26:48.036670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.333 [2024-11-19 21:26:48.036692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.333 [2024-11-19 21:26:48.036714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.333 [2024-11-19 21:26:48.049931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.333 [2024-11-19 21:26:48.050424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.333 [2024-11-19 21:26:48.050466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.333 [2024-11-19 21:26:48.050492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.333 [2024-11-19 21:26:48.050776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.333 [2024-11-19 21:26:48.051059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.333 [2024-11-19 21:26:48.051101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.333 [2024-11-19 21:26:48.051125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.333 [2024-11-19 21:26:48.051147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.333 [2024-11-19 21:26:48.064634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.333 [2024-11-19 21:26:48.065099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.333 [2024-11-19 21:26:48.065141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.333 [2024-11-19 21:26:48.065167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.333 [2024-11-19 21:26:48.065448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.333 [2024-11-19 21:26:48.065731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.333 [2024-11-19 21:26:48.065763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.333 [2024-11-19 21:26:48.065786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.333 [2024-11-19 21:26:48.065808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.333 [2024-11-19 21:26:48.079221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.333 [2024-11-19 21:26:48.079673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.333 [2024-11-19 21:26:48.079715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.333 [2024-11-19 21:26:48.079741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.333 [2024-11-19 21:26:48.080021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.333 [2024-11-19 21:26:48.080315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.333 [2024-11-19 21:26:48.080346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.333 [2024-11-19 21:26:48.080369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.333 [2024-11-19 21:26:48.080391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.333 [2024-11-19 21:26:48.093782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.333 [2024-11-19 21:26:48.094253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.333 [2024-11-19 21:26:48.094295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.333 [2024-11-19 21:26:48.094327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.333 [2024-11-19 21:26:48.094618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.333 [2024-11-19 21:26:48.094901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.333 [2024-11-19 21:26:48.094932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.333 [2024-11-19 21:26:48.094998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.333 [2024-11-19 21:26:48.095022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.333 [2024-11-19 21:26:48.108259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.333 [2024-11-19 21:26:48.108684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.334 [2024-11-19 21:26:48.108726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.334 [2024-11-19 21:26:48.108753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.334 [2024-11-19 21:26:48.109034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.334 [2024-11-19 21:26:48.109329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.334 [2024-11-19 21:26:48.109360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.334 [2024-11-19 21:26:48.109382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.334 [2024-11-19 21:26:48.109404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.334 [2024-11-19 21:26:48.122785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.334 [2024-11-19 21:26:48.123233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.334 [2024-11-19 21:26:48.123274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.334 [2024-11-19 21:26:48.123300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.334 [2024-11-19 21:26:48.123582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.334 [2024-11-19 21:26:48.123866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.334 [2024-11-19 21:26:48.123897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.334 [2024-11-19 21:26:48.123920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.334 [2024-11-19 21:26:48.123942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.593 [2024-11-19 21:26:48.137341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.593 [2024-11-19 21:26:48.137791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.593 [2024-11-19 21:26:48.137832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.593 [2024-11-19 21:26:48.137866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.593 [2024-11-19 21:26:48.138164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.593 [2024-11-19 21:26:48.138446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.593 [2024-11-19 21:26:48.138478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.593 [2024-11-19 21:26:48.138501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.593 [2024-11-19 21:26:48.138523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.593 [2024-11-19 21:26:48.151897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.593 [2024-11-19 21:26:48.152356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.593 [2024-11-19 21:26:48.152397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.593 [2024-11-19 21:26:48.152424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.593 [2024-11-19 21:26:48.152707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.593 [2024-11-19 21:26:48.152989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.593 [2024-11-19 21:26:48.153020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.593 [2024-11-19 21:26:48.153043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.593 [2024-11-19 21:26:48.153065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.593 [2024-11-19 21:26:48.166292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.593 [2024-11-19 21:26:48.166757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.593 [2024-11-19 21:26:48.166798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.593 [2024-11-19 21:26:48.166825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.593 [2024-11-19 21:26:48.167120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.593 [2024-11-19 21:26:48.167404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.593 [2024-11-19 21:26:48.167436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.593 [2024-11-19 21:26:48.167458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.593 [2024-11-19 21:26:48.167479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.593 [2024-11-19 21:26:48.180888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.593 [2024-11-19 21:26:48.181351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.593 [2024-11-19 21:26:48.181392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.593 [2024-11-19 21:26:48.181419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.593 [2024-11-19 21:26:48.181701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.593 [2024-11-19 21:26:48.181985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.593 [2024-11-19 21:26:48.182022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.593 [2024-11-19 21:26:48.182046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.593 [2024-11-19 21:26:48.182067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.593 [2024-11-19 21:26:48.195474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.593 [2024-11-19 21:26:48.195923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.594 [2024-11-19 21:26:48.195964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.594 [2024-11-19 21:26:48.195990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.594 [2024-11-19 21:26:48.196282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.594 [2024-11-19 21:26:48.196566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.594 [2024-11-19 21:26:48.196598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.594 [2024-11-19 21:26:48.196621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.594 [2024-11-19 21:26:48.196643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.594 [2024-11-19 21:26:48.210044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.594 [2024-11-19 21:26:48.210498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.594 [2024-11-19 21:26:48.210539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.594 [2024-11-19 21:26:48.210565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.594 [2024-11-19 21:26:48.210846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.594 [2024-11-19 21:26:48.211143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.594 [2024-11-19 21:26:48.211175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.594 [2024-11-19 21:26:48.211197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.594 [2024-11-19 21:26:48.211219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.594 [2024-11-19 21:26:48.224630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.594 [2024-11-19 21:26:48.225092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.594 [2024-11-19 21:26:48.225134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.594 [2024-11-19 21:26:48.225160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.594 [2024-11-19 21:26:48.225441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.594 [2024-11-19 21:26:48.225724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.594 [2024-11-19 21:26:48.225755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.594 [2024-11-19 21:26:48.225778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.594 [2024-11-19 21:26:48.225806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.594 [2024-11-19 21:26:48.239197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.594 [2024-11-19 21:26:48.239661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.594 [2024-11-19 21:26:48.239702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.594 [2024-11-19 21:26:48.239729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.594 [2024-11-19 21:26:48.240011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.594 [2024-11-19 21:26:48.240307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.594 [2024-11-19 21:26:48.240339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.594 [2024-11-19 21:26:48.240362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.594 [2024-11-19 21:26:48.240383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.594 [2024-11-19 21:26:48.253740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.594 [2024-11-19 21:26:48.254216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.594 [2024-11-19 21:26:48.254257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.594 [2024-11-19 21:26:48.254284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.594 [2024-11-19 21:26:48.254565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.594 [2024-11-19 21:26:48.254849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.594 [2024-11-19 21:26:48.254880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.594 [2024-11-19 21:26:48.254904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.594 [2024-11-19 21:26:48.254926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.594 [2024-11-19 21:26:48.268327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.594 [2024-11-19 21:26:48.268786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.594 [2024-11-19 21:26:48.268827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.594 [2024-11-19 21:26:48.268853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.594 [2024-11-19 21:26:48.269144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.594 [2024-11-19 21:26:48.269445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.594 [2024-11-19 21:26:48.269477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.594 [2024-11-19 21:26:48.269501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.594 [2024-11-19 21:26:48.269522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.594 [2024-11-19 21:26:48.282779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.594 [2024-11-19 21:26:48.283264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.594 [2024-11-19 21:26:48.283306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.594 [2024-11-19 21:26:48.283333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.594 [2024-11-19 21:26:48.283618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.594 [2024-11-19 21:26:48.283906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.594 [2024-11-19 21:26:48.283937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.594 [2024-11-19 21:26:48.283959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.594 [2024-11-19 21:26:48.283981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.594 [2024-11-19 21:26:48.297311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.594 [2024-11-19 21:26:48.297763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.594 [2024-11-19 21:26:48.297805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.594 [2024-11-19 21:26:48.297832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.594 [2024-11-19 21:26:48.298136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.594 [2024-11-19 21:26:48.298422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.594 [2024-11-19 21:26:48.298453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.594 [2024-11-19 21:26:48.298476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.594 [2024-11-19 21:26:48.298497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.594 [2024-11-19 21:26:48.311778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.594 [2024-11-19 21:26:48.312200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.594 [2024-11-19 21:26:48.312242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.594 [2024-11-19 21:26:48.312269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.594 [2024-11-19 21:26:48.312552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.594 [2024-11-19 21:26:48.312836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.594 [2024-11-19 21:26:48.312868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.594 [2024-11-19 21:26:48.312890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.594 [2024-11-19 21:26:48.312912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.594 [2024-11-19 21:26:48.326312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.594 [2024-11-19 21:26:48.326779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.594 [2024-11-19 21:26:48.326821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.594 [2024-11-19 21:26:48.326853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.594 [2024-11-19 21:26:48.327148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.594 [2024-11-19 21:26:48.327432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.594 [2024-11-19 21:26:48.327463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.594 [2024-11-19 21:26:48.327487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.595 [2024-11-19 21:26:48.327509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.595 [2024-11-19 21:26:48.340855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.595 [2024-11-19 21:26:48.341266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.595 [2024-11-19 21:26:48.341309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.595 [2024-11-19 21:26:48.341336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.595 [2024-11-19 21:26:48.341618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.595 [2024-11-19 21:26:48.341901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.595 [2024-11-19 21:26:48.341932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.595 [2024-11-19 21:26:48.341954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.595 [2024-11-19 21:26:48.341977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.595 [2024-11-19 21:26:48.355417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.595 [2024-11-19 21:26:48.355857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.595 [2024-11-19 21:26:48.355898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.595 [2024-11-19 21:26:48.355924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.595 [2024-11-19 21:26:48.356218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.595 [2024-11-19 21:26:48.356503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.595 [2024-11-19 21:26:48.356535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.595 [2024-11-19 21:26:48.356557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.595 [2024-11-19 21:26:48.356579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.595 [2024-11-19 21:26:48.369822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.595 [2024-11-19 21:26:48.370275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.595 [2024-11-19 21:26:48.370317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.595 [2024-11-19 21:26:48.370344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.595 [2024-11-19 21:26:48.370626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.595 [2024-11-19 21:26:48.370916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.595 [2024-11-19 21:26:48.370947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.595 [2024-11-19 21:26:48.370969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.595 [2024-11-19 21:26:48.370991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.595 [2024-11-19 21:26:48.384391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.595 [2024-11-19 21:26:48.384841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.595 [2024-11-19 21:26:48.384883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.595 [2024-11-19 21:26:48.384909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.595 [2024-11-19 21:26:48.385209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.595 [2024-11-19 21:26:48.385499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.595 [2024-11-19 21:26:48.385530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.595 [2024-11-19 21:26:48.385553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.595 [2024-11-19 21:26:48.385575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.854 [2024-11-19 21:26:48.398968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.854 [2024-11-19 21:26:48.399428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.854 [2024-11-19 21:26:48.399472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.854 [2024-11-19 21:26:48.399499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.854 [2024-11-19 21:26:48.399780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.854 [2024-11-19 21:26:48.400064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.854 [2024-11-19 21:26:48.400116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.854 [2024-11-19 21:26:48.400140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.854 [2024-11-19 21:26:48.400162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.854 [2024-11-19 21:26:48.413578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.854 [2024-11-19 21:26:48.414011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.854 [2024-11-19 21:26:48.414053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.854 [2024-11-19 21:26:48.414092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.854 [2024-11-19 21:26:48.414377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.854 [2024-11-19 21:26:48.414661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.854 [2024-11-19 21:26:48.414692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.854 [2024-11-19 21:26:48.414720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.854 [2024-11-19 21:26:48.414743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.854 [2024-11-19 21:26:48.428112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.854 [2024-11-19 21:26:48.428515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.854 [2024-11-19 21:26:48.428557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.854 [2024-11-19 21:26:48.428584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.854 [2024-11-19 21:26:48.428867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.854 [2024-11-19 21:26:48.429167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.854 [2024-11-19 21:26:48.429199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.854 [2024-11-19 21:26:48.429222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.854 [2024-11-19 21:26:48.429243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.854 [2024-11-19 21:26:48.442638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.854 [2024-11-19 21:26:48.443110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.854 [2024-11-19 21:26:48.443153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.854 [2024-11-19 21:26:48.443179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.854 [2024-11-19 21:26:48.443461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.854 [2024-11-19 21:26:48.443744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.854 [2024-11-19 21:26:48.443775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.854 [2024-11-19 21:26:48.443797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.854 [2024-11-19 21:26:48.443819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.854 [2024-11-19 21:26:48.457026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.854 [2024-11-19 21:26:48.457470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.854 [2024-11-19 21:26:48.457513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.854 [2024-11-19 21:26:48.457539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.854 [2024-11-19 21:26:48.457822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.854 [2024-11-19 21:26:48.458139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.854 [2024-11-19 21:26:48.458172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.854 [2024-11-19 21:26:48.458196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.855 [2024-11-19 21:26:48.458224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.855 [2024-11-19 21:26:48.471574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.855 [2024-11-19 21:26:48.472008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.855 [2024-11-19 21:26:48.472050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.855 [2024-11-19 21:26:48.472086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.855 [2024-11-19 21:26:48.472370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.855 [2024-11-19 21:26:48.472653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.855 [2024-11-19 21:26:48.472684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.855 [2024-11-19 21:26:48.472707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.855 [2024-11-19 21:26:48.472729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.855 [2024-11-19 21:26:48.486106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.855 [2024-11-19 21:26:48.486573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.855 [2024-11-19 21:26:48.486614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.855 [2024-11-19 21:26:48.486641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.855 [2024-11-19 21:26:48.486923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.855 [2024-11-19 21:26:48.487220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.855 [2024-11-19 21:26:48.487252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.855 [2024-11-19 21:26:48.487274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.855 [2024-11-19 21:26:48.487296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.855 [2024-11-19 21:26:48.500641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.855 [2024-11-19 21:26:48.501096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.855 [2024-11-19 21:26:48.501138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.855 [2024-11-19 21:26:48.501164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.855 [2024-11-19 21:26:48.501446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.855 [2024-11-19 21:26:48.501729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.855 [2024-11-19 21:26:48.501759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.855 [2024-11-19 21:26:48.501781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.855 [2024-11-19 21:26:48.501803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.855 [2024-11-19 21:26:48.515180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.855 [2024-11-19 21:26:48.515634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.855 [2024-11-19 21:26:48.515693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.855 [2024-11-19 21:26:48.515720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.855 [2024-11-19 21:26:48.516003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.855 [2024-11-19 21:26:48.516298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.855 [2024-11-19 21:26:48.516330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.855 [2024-11-19 21:26:48.516352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.855 [2024-11-19 21:26:48.516374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.855 [2024-11-19 21:26:48.529743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.855 [2024-11-19 21:26:48.530209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.855 [2024-11-19 21:26:48.530250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.855 [2024-11-19 21:26:48.530276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.855 [2024-11-19 21:26:48.530558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.855 [2024-11-19 21:26:48.530842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.855 [2024-11-19 21:26:48.530873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.855 [2024-11-19 21:26:48.530896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.855 [2024-11-19 21:26:48.530918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.855 [2024-11-19 21:26:48.544186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.855 [2024-11-19 21:26:48.544642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.855 [2024-11-19 21:26:48.544684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.855 [2024-11-19 21:26:48.544711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.855 [2024-11-19 21:26:48.544995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.855 [2024-11-19 21:26:48.545291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.855 [2024-11-19 21:26:48.545323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.855 [2024-11-19 21:26:48.545346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.855 [2024-11-19 21:26:48.545368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.855 [2024-11-19 21:26:48.558625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.855 [2024-11-19 21:26:48.559090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.855 [2024-11-19 21:26:48.559132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.855 [2024-11-19 21:26:48.559159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.855 [2024-11-19 21:26:48.559451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.855 [2024-11-19 21:26:48.559737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.855 [2024-11-19 21:26:48.559768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.855 [2024-11-19 21:26:48.559790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.855 [2024-11-19 21:26:48.559812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.855 [2024-11-19 21:26:48.573188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.855 [2024-11-19 21:26:48.573652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.855 [2024-11-19 21:26:48.573693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.855 [2024-11-19 21:26:48.573719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.855 [2024-11-19 21:26:48.574000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.855 [2024-11-19 21:26:48.574298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.855 [2024-11-19 21:26:48.574329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.855 [2024-11-19 21:26:48.574352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.855 [2024-11-19 21:26:48.574374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.856 [2024-11-19 21:26:48.587730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.856 [2024-11-19 21:26:48.588253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.856 [2024-11-19 21:26:48.588296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.856 [2024-11-19 21:26:48.588323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.856 [2024-11-19 21:26:48.588605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.856 [2024-11-19 21:26:48.588889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.856 [2024-11-19 21:26:48.588920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.856 [2024-11-19 21:26:48.588943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.856 [2024-11-19 21:26:48.588965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.856 [2024-11-19 21:26:48.602088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.856 [2024-11-19 21:26:48.602550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.856 [2024-11-19 21:26:48.602591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.856 [2024-11-19 21:26:48.602617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.856 [2024-11-19 21:26:48.602900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.856 [2024-11-19 21:26:48.603203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.856 [2024-11-19 21:26:48.603235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.856 [2024-11-19 21:26:48.603258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.856 [2024-11-19 21:26:48.603280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.856 [2024-11-19 21:26:48.616651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.856 [2024-11-19 21:26:48.617080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.856 [2024-11-19 21:26:48.617122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.856 [2024-11-19 21:26:48.617148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.856 [2024-11-19 21:26:48.617430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.856 [2024-11-19 21:26:48.617713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.856 [2024-11-19 21:26:48.617744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.856 [2024-11-19 21:26:48.617767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.856 [2024-11-19 21:26:48.617789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:14.856 [2024-11-19 21:26:48.631166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.856 [2024-11-19 21:26:48.631604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.856 [2024-11-19 21:26:48.631645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.856 [2024-11-19 21:26:48.631671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.856 [2024-11-19 21:26:48.631953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.856 [2024-11-19 21:26:48.632245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.856 [2024-11-19 21:26:48.632276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.856 [2024-11-19 21:26:48.632299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.856 [2024-11-19 21:26:48.632320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.856 [2024-11-19 21:26:48.645653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.856 [2024-11-19 21:26:48.646109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.856 [2024-11-19 21:26:48.646151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.856 [2024-11-19 21:26:48.646177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.856 [2024-11-19 21:26:48.646457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.856 [2024-11-19 21:26:48.646739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.856 [2024-11-19 21:26:48.646769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.115 [2024-11-19 21:26:48.646798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.115 [2024-11-19 21:26:48.646821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.115 [2024-11-19 21:26:48.660245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.115 [2024-11-19 21:26:48.660697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.115 [2024-11-19 21:26:48.660739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.115 [2024-11-19 21:26:48.660765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.115 [2024-11-19 21:26:48.661048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.115 [2024-11-19 21:26:48.661344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.115 [2024-11-19 21:26:48.661377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.115 [2024-11-19 21:26:48.661400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.115 [2024-11-19 21:26:48.661422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.115 [2024-11-19 21:26:48.674814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.115 [2024-11-19 21:26:48.675285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.115 [2024-11-19 21:26:48.675326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.115 [2024-11-19 21:26:48.675352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.115 [2024-11-19 21:26:48.675634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.115 [2024-11-19 21:26:48.675918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.115 [2024-11-19 21:26:48.675949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.115 [2024-11-19 21:26:48.675972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.115 [2024-11-19 21:26:48.675994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.115 [2024-11-19 21:26:48.689401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.116 [2024-11-19 21:26:48.689879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.116 [2024-11-19 21:26:48.689921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.116 [2024-11-19 21:26:48.689948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.116 [2024-11-19 21:26:48.690244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.116 [2024-11-19 21:26:48.690527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.116 [2024-11-19 21:26:48.690558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.116 [2024-11-19 21:26:48.690580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.116 [2024-11-19 21:26:48.690603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.116 [2024-11-19 21:26:48.703996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.116 [2024-11-19 21:26:48.704472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.116 [2024-11-19 21:26:48.704513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.116 [2024-11-19 21:26:48.704540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.116 [2024-11-19 21:26:48.704823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.116 [2024-11-19 21:26:48.705119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.116 [2024-11-19 21:26:48.705151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.116 [2024-11-19 21:26:48.705174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.116 [2024-11-19 21:26:48.705196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.116 [2024-11-19 21:26:48.718536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.116 [2024-11-19 21:26:48.718995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.116 [2024-11-19 21:26:48.719037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.116 [2024-11-19 21:26:48.719063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.116 [2024-11-19 21:26:48.719375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.116 [2024-11-19 21:26:48.719659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.116 [2024-11-19 21:26:48.719690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.116 [2024-11-19 21:26:48.719712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.116 [2024-11-19 21:26:48.719734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.116 [2024-11-19 21:26:48.733096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.116 [2024-11-19 21:26:48.733523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.116 [2024-11-19 21:26:48.733564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.116 [2024-11-19 21:26:48.733589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.116 [2024-11-19 21:26:48.733871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.116 [2024-11-19 21:26:48.734167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.116 [2024-11-19 21:26:48.734200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.116 [2024-11-19 21:26:48.734223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.116 [2024-11-19 21:26:48.734244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.116 [2024-11-19 21:26:48.747622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.116 [2024-11-19 21:26:48.748087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.116 [2024-11-19 21:26:48.748135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.116 [2024-11-19 21:26:48.748162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.116 [2024-11-19 21:26:48.748443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.116 [2024-11-19 21:26:48.748727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.116 [2024-11-19 21:26:48.748759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.116 [2024-11-19 21:26:48.748781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.116 [2024-11-19 21:26:48.748803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.116 [2024-11-19 21:26:48.762196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.116 [2024-11-19 21:26:48.762653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.116 [2024-11-19 21:26:48.762694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.116 [2024-11-19 21:26:48.762721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.116 [2024-11-19 21:26:48.763003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.116 [2024-11-19 21:26:48.763299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.116 [2024-11-19 21:26:48.763331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.116 [2024-11-19 21:26:48.763354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.116 [2024-11-19 21:26:48.763376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.116 [2024-11-19 21:26:48.776571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.116 [2024-11-19 21:26:48.777051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.116 [2024-11-19 21:26:48.777101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.116 [2024-11-19 21:26:48.777128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.116 [2024-11-19 21:26:48.777410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.116 [2024-11-19 21:26:48.777694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.116 [2024-11-19 21:26:48.777725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.116 [2024-11-19 21:26:48.777747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.116 [2024-11-19 21:26:48.777770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.116 [2024-11-19 21:26:48.790986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.116 [2024-11-19 21:26:48.791472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.116 [2024-11-19 21:26:48.791515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.116 [2024-11-19 21:26:48.791541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.116 [2024-11-19 21:26:48.791830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.116 [2024-11-19 21:26:48.792129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.116 [2024-11-19 21:26:48.792161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.116 [2024-11-19 21:26:48.792184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.116 [2024-11-19 21:26:48.792206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.116 [2024-11-19 21:26:48.805428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.116 [2024-11-19 21:26:48.805885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.116 [2024-11-19 21:26:48.805926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.116 [2024-11-19 21:26:48.805953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.116 [2024-11-19 21:26:48.806251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.116 [2024-11-19 21:26:48.806540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.116 [2024-11-19 21:26:48.806571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.117 [2024-11-19 21:26:48.806594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.117 [2024-11-19 21:26:48.806615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.117 [2024-11-19 21:26:48.819777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.117 [2024-11-19 21:26:48.820247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.117 [2024-11-19 21:26:48.820289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.117 [2024-11-19 21:26:48.820316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.117 [2024-11-19 21:26:48.820598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.117 [2024-11-19 21:26:48.820880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.117 [2024-11-19 21:26:48.820910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.117 [2024-11-19 21:26:48.820932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.117 [2024-11-19 21:26:48.820954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.117 [2024-11-19 21:26:48.834323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.117 [2024-11-19 21:26:48.834789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.117 [2024-11-19 21:26:48.834829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.117 [2024-11-19 21:26:48.834855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.117 [2024-11-19 21:26:48.835148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.117 [2024-11-19 21:26:48.835430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.117 [2024-11-19 21:26:48.835467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.117 [2024-11-19 21:26:48.835490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.117 [2024-11-19 21:26:48.835512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.117 [2024-11-19 21:26:48.848835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.117 [2024-11-19 21:26:48.849284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.117 [2024-11-19 21:26:48.849325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.117 [2024-11-19 21:26:48.849351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.117 [2024-11-19 21:26:48.849632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.117 [2024-11-19 21:26:48.849915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.117 [2024-11-19 21:26:48.849946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.117 [2024-11-19 21:26:48.849969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.117 [2024-11-19 21:26:48.849990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.117 [2024-11-19 21:26:48.863373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.117 [2024-11-19 21:26:48.863810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.117 [2024-11-19 21:26:48.863852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.117 [2024-11-19 21:26:48.863878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.117 [2024-11-19 21:26:48.864174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.117 [2024-11-19 21:26:48.864457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.117 [2024-11-19 21:26:48.864488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.117 [2024-11-19 21:26:48.864511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.117 [2024-11-19 21:26:48.864533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.117 [2024-11-19 21:26:48.877916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.117 [2024-11-19 21:26:48.878375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.117 [2024-11-19 21:26:48.878416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.117 [2024-11-19 21:26:48.878442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.117 [2024-11-19 21:26:48.878721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.117 [2024-11-19 21:26:48.879002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.117 [2024-11-19 21:26:48.879033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.117 [2024-11-19 21:26:48.879056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.117 [2024-11-19 21:26:48.879095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.117 [2024-11-19 21:26:48.892426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.117 [2024-11-19 21:26:48.892876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.117 [2024-11-19 21:26:48.892916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.117 [2024-11-19 21:26:48.892943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.117 [2024-11-19 21:26:48.893237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.117 [2024-11-19 21:26:48.893534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.117 [2024-11-19 21:26:48.893565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.117 [2024-11-19 21:26:48.893588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.117 [2024-11-19 21:26:48.893610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.117 [2024-11-19 21:26:48.906933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.117 [2024-11-19 21:26:48.907410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.117 [2024-11-19 21:26:48.907452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.117 [2024-11-19 21:26:48.907478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.117 [2024-11-19 21:26:48.907760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.376 [2024-11-19 21:26:48.908042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.376 [2024-11-19 21:26:48.908082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.376 [2024-11-19 21:26:48.908108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.376 [2024-11-19 21:26:48.908131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.376 [2024-11-19 21:26:48.921361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.376 [2024-11-19 21:26:48.921809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.376 [2024-11-19 21:26:48.921851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.376 [2024-11-19 21:26:48.921877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.376 [2024-11-19 21:26:48.922181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.376 [2024-11-19 21:26:48.922474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.376 [2024-11-19 21:26:48.922519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.376 [2024-11-19 21:26:48.922542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.376 [2024-11-19 21:26:48.922564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.376 [2024-11-19 21:26:48.935783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.376 [2024-11-19 21:26:48.936253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.376 [2024-11-19 21:26:48.936295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.376 [2024-11-19 21:26:48.936321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.376 [2024-11-19 21:26:48.936602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.376 [2024-11-19 21:26:48.936886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.376 [2024-11-19 21:26:48.936917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.376 [2024-11-19 21:26:48.936939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.376 [2024-11-19 21:26:48.936961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.376 [2024-11-19 21:26:48.950363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.376 [2024-11-19 21:26:48.950795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.376 [2024-11-19 21:26:48.950836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.376 [2024-11-19 21:26:48.950862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.376 [2024-11-19 21:26:48.951153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.376 [2024-11-19 21:26:48.951436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.376 [2024-11-19 21:26:48.951468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.376 [2024-11-19 21:26:48.951491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.376 [2024-11-19 21:26:48.951512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.376 3188.50 IOPS, 12.46 MiB/s [2024-11-19T20:26:49.171Z] [2024-11-19 21:26:48.965567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.376 [2024-11-19 21:26:48.966019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.376 [2024-11-19 21:26:48.966061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.376 [2024-11-19 21:26:48.966099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.376 [2024-11-19 21:26:48.966382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.376 [2024-11-19 21:26:48.966666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.376 [2024-11-19 21:26:48.966697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.376 [2024-11-19 21:26:48.966720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.376 [2024-11-19 21:26:48.966743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.376 [2024-11-19 21:26:48.980096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.376 [2024-11-19 21:26:48.980564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.376 [2024-11-19 21:26:48.980606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.377 [2024-11-19 21:26:48.980638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.377 [2024-11-19 21:26:48.980921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.377 [2024-11-19 21:26:48.981216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.377 [2024-11-19 21:26:48.981248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.377 [2024-11-19 21:26:48.981270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.377 [2024-11-19 21:26:48.981292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.377 [2024-11-19 21:26:48.994452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.377 [2024-11-19 21:26:48.994901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.377 [2024-11-19 21:26:48.994943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.377 [2024-11-19 21:26:48.994969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.377 [2024-11-19 21:26:48.995264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.377 [2024-11-19 21:26:48.995549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.377 [2024-11-19 21:26:48.995580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.377 [2024-11-19 21:26:48.995602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.377 [2024-11-19 21:26:48.995624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.377 [2024-11-19 21:26:49.009007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.377 [2024-11-19 21:26:49.009483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.377 [2024-11-19 21:26:49.009525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.377 [2024-11-19 21:26:49.009551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.377 [2024-11-19 21:26:49.009830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.377 [2024-11-19 21:26:49.010127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.377 [2024-11-19 21:26:49.010160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.377 [2024-11-19 21:26:49.010183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.377 [2024-11-19 21:26:49.010205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.377 [2024-11-19 21:26:49.023579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.377 [2024-11-19 21:26:49.024049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.377 [2024-11-19 21:26:49.024098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.377 [2024-11-19 21:26:49.024125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.377 [2024-11-19 21:26:49.024413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.377 [2024-11-19 21:26:49.024696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.377 [2024-11-19 21:26:49.024726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.377 [2024-11-19 21:26:49.024749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.377 [2024-11-19 21:26:49.024770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.377 [2024-11-19 21:26:49.037993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.377 [2024-11-19 21:26:49.038470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.377 [2024-11-19 21:26:49.038511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.377 [2024-11-19 21:26:49.038538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.377 [2024-11-19 21:26:49.038821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.377 [2024-11-19 21:26:49.039120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.377 [2024-11-19 21:26:49.039152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.377 [2024-11-19 21:26:49.039175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.377 [2024-11-19 21:26:49.039197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.377 [2024-11-19 21:26:49.052432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.377 [2024-11-19 21:26:49.052873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.377 [2024-11-19 21:26:49.052915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.377 [2024-11-19 21:26:49.052941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.377 [2024-11-19 21:26:49.053239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.377 [2024-11-19 21:26:49.053527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.377 [2024-11-19 21:26:49.053565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.377 [2024-11-19 21:26:49.053588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.377 [2024-11-19 21:26:49.053610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.377 [2024-11-19 21:26:49.066845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.377 [2024-11-19 21:26:49.067301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.377 [2024-11-19 21:26:49.067343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.377 [2024-11-19 21:26:49.067369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.377 [2024-11-19 21:26:49.067650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.377 [2024-11-19 21:26:49.067933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.377 [2024-11-19 21:26:49.067970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.377 [2024-11-19 21:26:49.067993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.377 [2024-11-19 21:26:49.068015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.377 [2024-11-19 21:26:49.081406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.377 [2024-11-19 21:26:49.081920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.377 [2024-11-19 21:26:49.081963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.377 [2024-11-19 21:26:49.081989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.377 [2024-11-19 21:26:49.082283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.377 [2024-11-19 21:26:49.082567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.377 [2024-11-19 21:26:49.082598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.377 [2024-11-19 21:26:49.082622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.377 [2024-11-19 21:26:49.082643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.377 [2024-11-19 21:26:49.095743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.377 [2024-11-19 21:26:49.096190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.378 [2024-11-19 21:26:49.096232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.378 [2024-11-19 21:26:49.096258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.378 [2024-11-19 21:26:49.096540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.378 [2024-11-19 21:26:49.096824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.378 [2024-11-19 21:26:49.096856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.378 [2024-11-19 21:26:49.096879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.378 [2024-11-19 21:26:49.096901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.378 [2024-11-19 21:26:49.110276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.378 [2024-11-19 21:26:49.110723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.378 [2024-11-19 21:26:49.110764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.378 [2024-11-19 21:26:49.110791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.378 [2024-11-19 21:26:49.111085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.378 [2024-11-19 21:26:49.111369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.378 [2024-11-19 21:26:49.111400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.378 [2024-11-19 21:26:49.111422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.378 [2024-11-19 21:26:49.111450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.378 [2024-11-19 21:26:49.124798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.378 [2024-11-19 21:26:49.125269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.378 [2024-11-19 21:26:49.125310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.378 [2024-11-19 21:26:49.125337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.378 [2024-11-19 21:26:49.125619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.378 [2024-11-19 21:26:49.125902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.378 [2024-11-19 21:26:49.125932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.378 [2024-11-19 21:26:49.125955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.378 [2024-11-19 21:26:49.125994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.378 [2024-11-19 21:26:49.139360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.378 [2024-11-19 21:26:49.139818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.378 [2024-11-19 21:26:49.139859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.378 [2024-11-19 21:26:49.139886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.378 [2024-11-19 21:26:49.140180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.378 [2024-11-19 21:26:49.140463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.378 [2024-11-19 21:26:49.140494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.378 [2024-11-19 21:26:49.140517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.378 [2024-11-19 21:26:49.140539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.378 [2024-11-19 21:26:49.153892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.378 [2024-11-19 21:26:49.154304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.378 [2024-11-19 21:26:49.154345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.378 [2024-11-19 21:26:49.154371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.378 [2024-11-19 21:26:49.154653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.378 [2024-11-19 21:26:49.154935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.378 [2024-11-19 21:26:49.154966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.378 [2024-11-19 21:26:49.154989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.378 [2024-11-19 21:26:49.155011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.378 [2024-11-19 21:26:49.168387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.637 [2024-11-19 21:26:49.168855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.637 [2024-11-19 21:26:49.168896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.637 [2024-11-19 21:26:49.168924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.637 [2024-11-19 21:26:49.169217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.637 [2024-11-19 21:26:49.169500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.637 [2024-11-19 21:26:49.169531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.637 [2024-11-19 21:26:49.169554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.637 [2024-11-19 21:26:49.169576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.637 [2024-11-19 21:26:49.182924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.637 [2024-11-19 21:26:49.183383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.637 [2024-11-19 21:26:49.183424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.637 [2024-11-19 21:26:49.183451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.637 [2024-11-19 21:26:49.183732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.637 [2024-11-19 21:26:49.184014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.637 [2024-11-19 21:26:49.184045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.637 [2024-11-19 21:26:49.184067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.637 [2024-11-19 21:26:49.184103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.637 [2024-11-19 21:26:49.197441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.637 [2024-11-19 21:26:49.197887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.637 [2024-11-19 21:26:49.197928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.637 [2024-11-19 21:26:49.197954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.637 [2024-11-19 21:26:49.198243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.637 [2024-11-19 21:26:49.198526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.637 [2024-11-19 21:26:49.198557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.637 [2024-11-19 21:26:49.198580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.637 [2024-11-19 21:26:49.198602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.637 [2024-11-19 21:26:49.211907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.637 [2024-11-19 21:26:49.212422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.637 [2024-11-19 21:26:49.212465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.637 [2024-11-19 21:26:49.212498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.637 [2024-11-19 21:26:49.212793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.637 [2024-11-19 21:26:49.213087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.637 [2024-11-19 21:26:49.213118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.638 [2024-11-19 21:26:49.213141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.638 [2024-11-19 21:26:49.213162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.638 [2024-11-19 21:26:49.226510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.638 [2024-11-19 21:26:49.226979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.638 [2024-11-19 21:26:49.227020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.638 [2024-11-19 21:26:49.227047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.638 [2024-11-19 21:26:49.227337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.638 [2024-11-19 21:26:49.227620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.638 [2024-11-19 21:26:49.227661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.638 [2024-11-19 21:26:49.227684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.638 [2024-11-19 21:26:49.227705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.638 [2024-11-19 21:26:49.240883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.638 [2024-11-19 21:26:49.241408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.638 [2024-11-19 21:26:49.241466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.638 [2024-11-19 21:26:49.241492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.638 [2024-11-19 21:26:49.241772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.638 [2024-11-19 21:26:49.242054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.638 [2024-11-19 21:26:49.242098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.638 [2024-11-19 21:26:49.242121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.638 [2024-11-19 21:26:49.242143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.638 [2024-11-19 21:26:49.255306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.638 [2024-11-19 21:26:49.255743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.638 [2024-11-19 21:26:49.255784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.638 [2024-11-19 21:26:49.255811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.638 [2024-11-19 21:26:49.256104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.638 [2024-11-19 21:26:49.256395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.638 [2024-11-19 21:26:49.256426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.638 [2024-11-19 21:26:49.256448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.638 [2024-11-19 21:26:49.256470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.638 [2024-11-19 21:26:49.269893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.638 [2024-11-19 21:26:49.270326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.638 [2024-11-19 21:26:49.270367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.638 [2024-11-19 21:26:49.270393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.638 [2024-11-19 21:26:49.270673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.638 [2024-11-19 21:26:49.270956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.638 [2024-11-19 21:26:49.270987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.638 [2024-11-19 21:26:49.271010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.638 [2024-11-19 21:26:49.271032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.638 [2024-11-19 21:26:49.284472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.638 [2024-11-19 21:26:49.284976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.638 [2024-11-19 21:26:49.285035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.638 [2024-11-19 21:26:49.285061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.638 [2024-11-19 21:26:49.285353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.638 [2024-11-19 21:26:49.285636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.638 [2024-11-19 21:26:49.285667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.638 [2024-11-19 21:26:49.285690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.638 [2024-11-19 21:26:49.285712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.638 [2024-11-19 21:26:49.298896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.638 [2024-11-19 21:26:49.299347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.638 [2024-11-19 21:26:49.299388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.638 [2024-11-19 21:26:49.299414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.638 [2024-11-19 21:26:49.299695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.638 [2024-11-19 21:26:49.299978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.638 [2024-11-19 21:26:49.300009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.638 [2024-11-19 21:26:49.300038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.638 [2024-11-19 21:26:49.300062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.638 [2024-11-19 21:26:49.313318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.638 [2024-11-19 21:26:49.313779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.638 [2024-11-19 21:26:49.313821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.638 [2024-11-19 21:26:49.313847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.638 [2024-11-19 21:26:49.314144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.638 [2024-11-19 21:26:49.314426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.638 [2024-11-19 21:26:49.314456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.638 [2024-11-19 21:26:49.314479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.638 [2024-11-19 21:26:49.314501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.638 [2024-11-19 21:26:49.327686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.638 [2024-11-19 21:26:49.328163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.638 [2024-11-19 21:26:49.328205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.638 [2024-11-19 21:26:49.328232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.638 [2024-11-19 21:26:49.328513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.638 [2024-11-19 21:26:49.328797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.638 [2024-11-19 21:26:49.328828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.638 [2024-11-19 21:26:49.328851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.638 [2024-11-19 21:26:49.328872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.638 [2024-11-19 21:26:49.342286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.638 [2024-11-19 21:26:49.342726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.638 [2024-11-19 21:26:49.342768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.638 [2024-11-19 21:26:49.342794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.638 [2024-11-19 21:26:49.343086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.638 [2024-11-19 21:26:49.343369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.638 [2024-11-19 21:26:49.343401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.638 [2024-11-19 21:26:49.343423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.638 [2024-11-19 21:26:49.343445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.639 [2024-11-19 21:26:49.356798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.639 [2024-11-19 21:26:49.357289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.639 [2024-11-19 21:26:49.357331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.639 [2024-11-19 21:26:49.357358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.639 [2024-11-19 21:26:49.357638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.639 [2024-11-19 21:26:49.357919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.639 [2024-11-19 21:26:49.357950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.639 [2024-11-19 21:26:49.357973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.639 [2024-11-19 21:26:49.357995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.639 [2024-11-19 21:26:49.371367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.639 [2024-11-19 21:26:49.371827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.639 [2024-11-19 21:26:49.371868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.639 [2024-11-19 21:26:49.371894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.639 [2024-11-19 21:26:49.372186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.639 [2024-11-19 21:26:49.372474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.639 [2024-11-19 21:26:49.372505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.639 [2024-11-19 21:26:49.372529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.639 [2024-11-19 21:26:49.372550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.639 [2024-11-19 21:26:49.385862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.639 [2024-11-19 21:26:49.386284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.639 [2024-11-19 21:26:49.386326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.639 [2024-11-19 21:26:49.386353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.639 [2024-11-19 21:26:49.386633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.639 [2024-11-19 21:26:49.386915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.639 [2024-11-19 21:26:49.386946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.639 [2024-11-19 21:26:49.386968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.639 [2024-11-19 21:26:49.386990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.639 [2024-11-19 21:26:49.400287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.639 [2024-11-19 21:26:49.400735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.639 [2024-11-19 21:26:49.400783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.639 [2024-11-19 21:26:49.400810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.639 [2024-11-19 21:26:49.401102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.639 [2024-11-19 21:26:49.401383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.639 [2024-11-19 21:26:49.401414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.639 [2024-11-19 21:26:49.401437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.639 [2024-11-19 21:26:49.401459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.639 [2024-11-19 21:26:49.414729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.639 [2024-11-19 21:26:49.415205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.639 [2024-11-19 21:26:49.415246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.639 [2024-11-19 21:26:49.415273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.639 [2024-11-19 21:26:49.415552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.639 [2024-11-19 21:26:49.415836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.639 [2024-11-19 21:26:49.415867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.639 [2024-11-19 21:26:49.415890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.639 [2024-11-19 21:26:49.415911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.639 [2024-11-19 21:26:49.429215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.639 [2024-11-19 21:26:49.429667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.639 [2024-11-19 21:26:49.429708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.639 [2024-11-19 21:26:49.429735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.898 [2024-11-19 21:26:49.430017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.898 [2024-11-19 21:26:49.430308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.898 [2024-11-19 21:26:49.430340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.898 [2024-11-19 21:26:49.430364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.898 [2024-11-19 21:26:49.430386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.898 [2024-11-19 21:26:49.443692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.898 [2024-11-19 21:26:49.444156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.898 [2024-11-19 21:26:49.444198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.898 [2024-11-19 21:26:49.444225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.898 [2024-11-19 21:26:49.444511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.898 [2024-11-19 21:26:49.444792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.898 [2024-11-19 21:26:49.444823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.898 [2024-11-19 21:26:49.444846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.898 [2024-11-19 21:26:49.444868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.898 [2024-11-19 21:26:49.458159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.898 [2024-11-19 21:26:49.458603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.898 [2024-11-19 21:26:49.458645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.898 [2024-11-19 21:26:49.458672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.898 [2024-11-19 21:26:49.458952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.898 [2024-11-19 21:26:49.459246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.898 [2024-11-19 21:26:49.459278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.898 [2024-11-19 21:26:49.459300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.898 [2024-11-19 21:26:49.459322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.898 [2024-11-19 21:26:49.472715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.898 [2024-11-19 21:26:49.473185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.898 [2024-11-19 21:26:49.473228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.898 [2024-11-19 21:26:49.473254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.898 [2024-11-19 21:26:49.473544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.898 [2024-11-19 21:26:49.473826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.898 [2024-11-19 21:26:49.473857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.898 [2024-11-19 21:26:49.473879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.898 [2024-11-19 21:26:49.473902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.898 [2024-11-19 21:26:49.487188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.898 [2024-11-19 21:26:49.487646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.898 [2024-11-19 21:26:49.487687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.898 [2024-11-19 21:26:49.487714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.898 [2024-11-19 21:26:49.487994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.898 [2024-11-19 21:26:49.488297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.898 [2024-11-19 21:26:49.488329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.898 [2024-11-19 21:26:49.488352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.898 [2024-11-19 21:26:49.488374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.899 [2024-11-19 21:26:49.501633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.899 [2024-11-19 21:26:49.502074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.899 [2024-11-19 21:26:49.502123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.899 [2024-11-19 21:26:49.502149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.899 [2024-11-19 21:26:49.502429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.899 [2024-11-19 21:26:49.502711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.899 [2024-11-19 21:26:49.502742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.899 [2024-11-19 21:26:49.502765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.899 [2024-11-19 21:26:49.502787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.899 [2024-11-19 21:26:49.516027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.899 [2024-11-19 21:26:49.516490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.899 [2024-11-19 21:26:49.516538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.899 [2024-11-19 21:26:49.516565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.899 [2024-11-19 21:26:49.516845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.899 [2024-11-19 21:26:49.517140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.899 [2024-11-19 21:26:49.517182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.899 [2024-11-19 21:26:49.517204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.899 [2024-11-19 21:26:49.517227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.899 [2024-11-19 21:26:49.530501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.899 [2024-11-19 21:26:49.530949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.899 [2024-11-19 21:26:49.530990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.899 [2024-11-19 21:26:49.531016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.899 [2024-11-19 21:26:49.531317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.899 [2024-11-19 21:26:49.531598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.899 [2024-11-19 21:26:49.531629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.899 [2024-11-19 21:26:49.531658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.899 [2024-11-19 21:26:49.531681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.899 [2024-11-19 21:26:49.544954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.899 [2024-11-19 21:26:49.545423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.899 [2024-11-19 21:26:49.545464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.899 [2024-11-19 21:26:49.545506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.899 [2024-11-19 21:26:49.545786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.899 [2024-11-19 21:26:49.546078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.899 [2024-11-19 21:26:49.546109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.899 [2024-11-19 21:26:49.546132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.899 [2024-11-19 21:26:49.546154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.899 [2024-11-19 21:26:49.559433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.899 [2024-11-19 21:26:49.559993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.899 [2024-11-19 21:26:49.560050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.899 [2024-11-19 21:26:49.560087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.899 [2024-11-19 21:26:49.560369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.899 [2024-11-19 21:26:49.560650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.899 [2024-11-19 21:26:49.560680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.899 [2024-11-19 21:26:49.560703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.899 [2024-11-19 21:26:49.560725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.899 [2024-11-19 21:26:49.573788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.899 [2024-11-19 21:26:49.574245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.899 [2024-11-19 21:26:49.574287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.899 [2024-11-19 21:26:49.574314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.899 [2024-11-19 21:26:49.574593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.899 [2024-11-19 21:26:49.574874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.899 [2024-11-19 21:26:49.574904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.899 [2024-11-19 21:26:49.574927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.899 [2024-11-19 21:26:49.574949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.899 [2024-11-19 21:26:49.588221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.899 [2024-11-19 21:26:49.588669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.899 [2024-11-19 21:26:49.588710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.899 [2024-11-19 21:26:49.588736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.899 [2024-11-19 21:26:49.589016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.899 [2024-11-19 21:26:49.589309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.899 [2024-11-19 21:26:49.589341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.899 [2024-11-19 21:26:49.589363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.899 [2024-11-19 21:26:49.589384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.899 [2024-11-19 21:26:49.602679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.899 [2024-11-19 21:26:49.603135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.899 [2024-11-19 21:26:49.603177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.899 [2024-11-19 21:26:49.603204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.899 [2024-11-19 21:26:49.603484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.899 [2024-11-19 21:26:49.603766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.899 [2024-11-19 21:26:49.603797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.899 [2024-11-19 21:26:49.603820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.899 [2024-11-19 21:26:49.603841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.899 [2024-11-19 21:26:49.617116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.899 [2024-11-19 21:26:49.617570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.899 [2024-11-19 21:26:49.617611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.900 [2024-11-19 21:26:49.617638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.900 [2024-11-19 21:26:49.617918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.900 [2024-11-19 21:26:49.618214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.900 [2024-11-19 21:26:49.618247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.900 [2024-11-19 21:26:49.618269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.900 [2024-11-19 21:26:49.618290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.900 [2024-11-19 21:26:49.631563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.900 [2024-11-19 21:26:49.632043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.900 [2024-11-19 21:26:49.632101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.900 [2024-11-19 21:26:49.632129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.900 [2024-11-19 21:26:49.632408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.900 [2024-11-19 21:26:49.632688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.900 [2024-11-19 21:26:49.632719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.900 [2024-11-19 21:26:49.632742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.900 [2024-11-19 21:26:49.632764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.900 [2024-11-19 21:26:49.646022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.900 [2024-11-19 21:26:49.646454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.900 [2024-11-19 21:26:49.646495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.900 [2024-11-19 21:26:49.646521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.900 [2024-11-19 21:26:49.646800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.900 [2024-11-19 21:26:49.647093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.900 [2024-11-19 21:26:49.647124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.900 [2024-11-19 21:26:49.647146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.900 [2024-11-19 21:26:49.647168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.900 [2024-11-19 21:26:49.660411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.900 [2024-11-19 21:26:49.660833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.900 [2024-11-19 21:26:49.660875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.900 [2024-11-19 21:26:49.660901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.900 [2024-11-19 21:26:49.661195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.900 [2024-11-19 21:26:49.661476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.900 [2024-11-19 21:26:49.661507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.900 [2024-11-19 21:26:49.661529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.900 [2024-11-19 21:26:49.661551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.900 [2024-11-19 21:26:49.674833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.900 [2024-11-19 21:26:49.675302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.900 [2024-11-19 21:26:49.675344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.900 [2024-11-19 21:26:49.675370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.900 [2024-11-19 21:26:49.675655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.900 [2024-11-19 21:26:49.675937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.900 [2024-11-19 21:26:49.675968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.900 [2024-11-19 21:26:49.675990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.900 [2024-11-19 21:26:49.676011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.900 [2024-11-19 21:26:49.689298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.900 [2024-11-19 21:26:49.689750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.900 [2024-11-19 21:26:49.689792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.900 [2024-11-19 21:26:49.689819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.900 [2024-11-19 21:26:49.690112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.900 [2024-11-19 21:26:49.690395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.900 [2024-11-19 21:26:49.690426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.900 [2024-11-19 21:26:49.690449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.900 [2024-11-19 21:26:49.690471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.159 [2024-11-19 21:26:49.703701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.159 [2024-11-19 21:26:49.704165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.159 [2024-11-19 21:26:49.704207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.159 [2024-11-19 21:26:49.704233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.159 [2024-11-19 21:26:49.704514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.159 [2024-11-19 21:26:49.704795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.159 [2024-11-19 21:26:49.704826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.159 [2024-11-19 21:26:49.704848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.159 [2024-11-19 21:26:49.704870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.159 [2024-11-19 21:26:49.718127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.159 [2024-11-19 21:26:49.718588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.159 [2024-11-19 21:26:49.718630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.159 [2024-11-19 21:26:49.718656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.159 [2024-11-19 21:26:49.718935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.160 [2024-11-19 21:26:49.719231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.160 [2024-11-19 21:26:49.719268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.160 [2024-11-19 21:26:49.719292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.160 [2024-11-19 21:26:49.719314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.160 [2024-11-19 21:26:49.732632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.160 [2024-11-19 21:26:49.733096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.160 [2024-11-19 21:26:49.733139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.160 [2024-11-19 21:26:49.733165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.160 [2024-11-19 21:26:49.733447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.160 [2024-11-19 21:26:49.733728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.160 [2024-11-19 21:26:49.733760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.160 [2024-11-19 21:26:49.733782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.160 [2024-11-19 21:26:49.733803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.160 [2024-11-19 21:26:49.747078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.160 [2024-11-19 21:26:49.747525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.160 [2024-11-19 21:26:49.747567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.160 [2024-11-19 21:26:49.747593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.160 [2024-11-19 21:26:49.747873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.160 [2024-11-19 21:26:49.748184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.160 [2024-11-19 21:26:49.748215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.160 [2024-11-19 21:26:49.748238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.160 [2024-11-19 21:26:49.748260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.160 [2024-11-19 21:26:49.761523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.160 [2024-11-19 21:26:49.761976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.160 [2024-11-19 21:26:49.762017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.160 [2024-11-19 21:26:49.762042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.160 [2024-11-19 21:26:49.762331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.160 [2024-11-19 21:26:49.762611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.160 [2024-11-19 21:26:49.762643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.160 [2024-11-19 21:26:49.762665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.160 [2024-11-19 21:26:49.762693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.160 [2024-11-19 21:26:49.775958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.160 [2024-11-19 21:26:49.776398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.160 [2024-11-19 21:26:49.776439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.160 [2024-11-19 21:26:49.776465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.160 [2024-11-19 21:26:49.776744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.160 [2024-11-19 21:26:49.777025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.160 [2024-11-19 21:26:49.777055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.160 [2024-11-19 21:26:49.777092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.160 [2024-11-19 21:26:49.777116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.160 [2024-11-19 21:26:49.790375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.160 [2024-11-19 21:26:49.790838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.160 [2024-11-19 21:26:49.790879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.160 [2024-11-19 21:26:49.790905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.160 [2024-11-19 21:26:49.791200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.160 [2024-11-19 21:26:49.791482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.160 [2024-11-19 21:26:49.791513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.160 [2024-11-19 21:26:49.791535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.160 [2024-11-19 21:26:49.791557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.160 [2024-11-19 21:26:49.804824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.160 [2024-11-19 21:26:49.805297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.160 [2024-11-19 21:26:49.805338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.160 [2024-11-19 21:26:49.805364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.160 [2024-11-19 21:26:49.805644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.160 [2024-11-19 21:26:49.805925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.160 [2024-11-19 21:26:49.805955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.160 [2024-11-19 21:26:49.805977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.160 [2024-11-19 21:26:49.805999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.160 [2024-11-19 21:26:49.819261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.160 [2024-11-19 21:26:49.819684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.160 [2024-11-19 21:26:49.819725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.160 [2024-11-19 21:26:49.819751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.160 [2024-11-19 21:26:49.820029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.160 [2024-11-19 21:26:49.820321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.160 [2024-11-19 21:26:49.820352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.160 [2024-11-19 21:26:49.820374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.160 [2024-11-19 21:26:49.820396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.160 [2024-11-19 21:26:49.833640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.160 [2024-11-19 21:26:49.834136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.160 [2024-11-19 21:26:49.834179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.160 [2024-11-19 21:26:49.834205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.160 [2024-11-19 21:26:49.834485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.160 [2024-11-19 21:26:49.834766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.160 [2024-11-19 21:26:49.834797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.160 [2024-11-19 21:26:49.834820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.161 [2024-11-19 21:26:49.834841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.161 [2024-11-19 21:26:49.848099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.161 [2024-11-19 21:26:49.848540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.161 [2024-11-19 21:26:49.848580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.161 [2024-11-19 21:26:49.848606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.161 [2024-11-19 21:26:49.848884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.161 [2024-11-19 21:26:49.849178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.161 [2024-11-19 21:26:49.849210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.161 [2024-11-19 21:26:49.849232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.161 [2024-11-19 21:26:49.849254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.161 [2024-11-19 21:26:49.862520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.161 [2024-11-19 21:26:49.862984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.161 [2024-11-19 21:26:49.863025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.161 [2024-11-19 21:26:49.863057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.161 [2024-11-19 21:26:49.863351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.161 [2024-11-19 21:26:49.863634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.161 [2024-11-19 21:26:49.863664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.161 [2024-11-19 21:26:49.863686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.161 [2024-11-19 21:26:49.863708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.161 [2024-11-19 21:26:49.876992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.161 [2024-11-19 21:26:49.877422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.161 [2024-11-19 21:26:49.877463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.161 [2024-11-19 21:26:49.877489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.161 [2024-11-19 21:26:49.877769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.161 [2024-11-19 21:26:49.878052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.161 [2024-11-19 21:26:49.878095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.161 [2024-11-19 21:26:49.878120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.161 [2024-11-19 21:26:49.878142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.161 [2024-11-19 21:26:49.891401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.161 [2024-11-19 21:26:49.891845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.161 [2024-11-19 21:26:49.891887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.161 [2024-11-19 21:26:49.891914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.161 [2024-11-19 21:26:49.892206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.161 [2024-11-19 21:26:49.892487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.161 [2024-11-19 21:26:49.892518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.161 [2024-11-19 21:26:49.892540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.161 [2024-11-19 21:26:49.892563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.161 [2024-11-19 21:26:49.905801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.161 [2024-11-19 21:26:49.906277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.161 [2024-11-19 21:26:49.906318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.161 [2024-11-19 21:26:49.906344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.161 [2024-11-19 21:26:49.906622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.161 [2024-11-19 21:26:49.906909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.161 [2024-11-19 21:26:49.906941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.161 [2024-11-19 21:26:49.906964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.161 [2024-11-19 21:26:49.906985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.161 [2024-11-19 21:26:49.920270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.161 [2024-11-19 21:26:49.920735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.161 [2024-11-19 21:26:49.920776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.161 [2024-11-19 21:26:49.920803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.161 [2024-11-19 21:26:49.921093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.161 [2024-11-19 21:26:49.921384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.161 [2024-11-19 21:26:49.921414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.161 [2024-11-19 21:26:49.921436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.161 [2024-11-19 21:26:49.921458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.161 [2024-11-19 21:26:49.934757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.161 [2024-11-19 21:26:49.935210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.161 [2024-11-19 21:26:49.935252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.161 [2024-11-19 21:26:49.935278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.161 [2024-11-19 21:26:49.935558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.161 [2024-11-19 21:26:49.935840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.161 [2024-11-19 21:26:49.935870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.161 [2024-11-19 21:26:49.935892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.161 [2024-11-19 21:26:49.935913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.161 [2024-11-19 21:26:49.949152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.161 [2024-11-19 21:26:49.949676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.161 [2024-11-19 21:26:49.949717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.161 [2024-11-19 21:26:49.949743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.161 [2024-11-19 21:26:49.950022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.161 [2024-11-19 21:26:49.950312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.161 [2024-11-19 21:26:49.950350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.161 [2024-11-19 21:26:49.950388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.161 [2024-11-19 21:26:49.950411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.420 2550.80 IOPS, 9.96 MiB/s [2024-11-19T20:26:50.215Z] [2024-11-19 21:26:49.965523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.420 [2024-11-19 21:26:49.965970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.420 [2024-11-19 21:26:49.966011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.420 [2024-11-19 21:26:49.966036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.420 [2024-11-19 21:26:49.966325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.420 [2024-11-19 21:26:49.966607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.420 [2024-11-19 21:26:49.966638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.420 [2024-11-19 21:26:49.966660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.420 [2024-11-19 21:26:49.966690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
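The bdevperf progress sample interleaved above (2550.80 IOPS, 9.96 MiB/s) is worth a quick sanity check. The two figures are consistent with a 4 KiB I/O size, an assumption made here only for the arithmetic since this excerpt does not state the workload's block size:

    2550.80 IOPS x 4096 B per I/O = 10,448,077 B/s
    10,448,077 B/s / 1,048,576 B per MiB ≈ 9.96 MiB/s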
00:37:16.420 [2024-11-19 21:26:49.979952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.420 [2024-11-19 21:26:49.980432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.420 [2024-11-19 21:26:49.980474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.420 [2024-11-19 21:26:49.980500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.420 [2024-11-19 21:26:49.980778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.420 [2024-11-19 21:26:49.981059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.420 [2024-11-19 21:26:49.981171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.420 [2024-11-19 21:26:49.981195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.420 [2024-11-19 21:26:49.981217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.420 [2024-11-19 21:26:49.994491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.420 [2024-11-19 21:26:49.994944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.421 [2024-11-19 21:26:49.994984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.421 [2024-11-19 21:26:49.995010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.421 [2024-11-19 21:26:49.995302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.421 [2024-11-19 21:26:49.995583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.421 [2024-11-19 21:26:49.995613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.421 [2024-11-19 21:26:49.995635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.421 [2024-11-19 21:26:49.995666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.421 [2024-11-19 21:26:50.008939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.421 [2024-11-19 21:26:50.009388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.421 [2024-11-19 21:26:50.009430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.421 [2024-11-19 21:26:50.009457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.421 [2024-11-19 21:26:50.009737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.421 [2024-11-19 21:26:50.010018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.421 [2024-11-19 21:26:50.010048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.421 [2024-11-19 21:26:50.010080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.421 [2024-11-19 21:26:50.010106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.421 [2024-11-19 21:26:50.023314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.421 [2024-11-19 21:26:50.023814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.421 [2024-11-19 21:26:50.023860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.421 [2024-11-19 21:26:50.023888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.421 [2024-11-19 21:26:50.024184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.421 [2024-11-19 21:26:50.024472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.421 [2024-11-19 21:26:50.024504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.421 [2024-11-19 21:26:50.024527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.421 [2024-11-19 21:26:50.024549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3165943 Killed "${NVMF_APP[@]}" "$@" 00:37:16.421 21:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:16.421 21:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:16.421 21:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:16.421 21:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:16.421 21:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:16.421 21:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3167136 00:37:16.421 21:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:16.421 21:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3167136 00:37:16.421 21:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3167136 ']' 00:37:16.421 21:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:16.421 21:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:16.421 21:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:16.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:16.421 21:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:16.421 21:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:16.421 [2024-11-19 21:26:50.037959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.421 [2024-11-19 21:26:50.038433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.421 [2024-11-19 21:26:50.038478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.421 [2024-11-19 21:26:50.038505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.421 [2024-11-19 21:26:50.038797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.421 [2024-11-19 21:26:50.039099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.421 [2024-11-19 21:26:50.039132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.421 [2024-11-19 21:26:50.039157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.421 [2024-11-19 21:26:50.039180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
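The trace above also shows why the reconnect attempts keep logging "connect() failed, errno = 111": bdevperf.sh has just killed the NVMe-oF target (line 35, Killed "${NVMF_APP[@]}") and is restarting it via tgt_init/nvmfappstart, so for a short window nothing is listening on 10.0.0.2:4420 and every TCP connect() from the initiator side is refused. On Linux, errno 111 is ECONNREFUSED. The stand-alone sketch below reproduces only that errno; the loopback address, the assumption that nothing listens locally on port 4420, and the program itself are illustrative and not part of the test.

/*
 * Minimal illustration of the errno seen in the log above: a TCP connect()
 * to a port with no listener fails with ECONNREFUSED, which is errno 111
 * on Linux. Assumes nothing is listening locally on port 4420.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),          /* NVMe/TCP default port */
    };

    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);   /* placeholder address */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener present this prints:
         *   connect() failed, errno = 111 (Connection refused)
         * the same condition posix_sock_create reports while the
         * target is down between kill and restart. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}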
00:37:16.421 [2024-11-19 21:26:50.052506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.421 [2024-11-19 21:26:50.052951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.421 [2024-11-19 21:26:50.052996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.421 [2024-11-19 21:26:50.053024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.421 [2024-11-19 21:26:50.053323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.421 [2024-11-19 21:26:50.053615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.421 [2024-11-19 21:26:50.053646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.421 [2024-11-19 21:26:50.053669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.421 [2024-11-19 21:26:50.053692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.421 [2024-11-19 21:26:50.067262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.421 [2024-11-19 21:26:50.067768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.421 [2024-11-19 21:26:50.067827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.421 [2024-11-19 21:26:50.067855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.421 [2024-11-19 21:26:50.068151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.421 [2024-11-19 21:26:50.068438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.421 [2024-11-19 21:26:50.068469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.421 [2024-11-19 21:26:50.068492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.421 [2024-11-19 21:26:50.068515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.421 [2024-11-19 21:26:50.081899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.421 [2024-11-19 21:26:50.082392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.421 [2024-11-19 21:26:50.082450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.421 [2024-11-19 21:26:50.082477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.421 [2024-11-19 21:26:50.082760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.421 [2024-11-19 21:26:50.083054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.421 [2024-11-19 21:26:50.083102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.421 [2024-11-19 21:26:50.083125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.421 [2024-11-19 21:26:50.083148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.421 [2024-11-19 21:26:50.096399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.421 [2024-11-19 21:26:50.096867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.421 [2024-11-19 21:26:50.096909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.421 [2024-11-19 21:26:50.096936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.421 [2024-11-19 21:26:50.097229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.421 [2024-11-19 21:26:50.097513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.421 [2024-11-19 21:26:50.097543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.421 [2024-11-19 21:26:50.097566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.421 [2024-11-19 21:26:50.097588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.421 [2024-11-19 21:26:50.110773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.421 [2024-11-19 21:26:50.111240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.421 [2024-11-19 21:26:50.111282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.421 [2024-11-19 21:26:50.111308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.421 [2024-11-19 21:26:50.111589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.421 [2024-11-19 21:26:50.111873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.422 [2024-11-19 21:26:50.111903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.422 [2024-11-19 21:26:50.111925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.422 [2024-11-19 21:26:50.111957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.422 [2024-11-19 21:26:50.125164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.422 [2024-11-19 21:26:50.125647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.422 [2024-11-19 21:26:50.125712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.422 [2024-11-19 21:26:50.125702] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:37:16.422 [2024-11-19 21:26:50.125740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.422 [2024-11-19 21:26:50.125835] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:16.422 [2024-11-19 21:26:50.126027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.422 [2024-11-19 21:26:50.126329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.422 [2024-11-19 21:26:50.126361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.422 [2024-11-19 21:26:50.126384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.422 [2024-11-19 21:26:50.126414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.422 [2024-11-19 21:26:50.139614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.422 [2024-11-19 21:26:50.140117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.422 [2024-11-19 21:26:50.140159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.422 [2024-11-19 21:26:50.140186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.422 [2024-11-19 21:26:50.140468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.422 [2024-11-19 21:26:50.140750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.422 [2024-11-19 21:26:50.140780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.422 [2024-11-19 21:26:50.140803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.422 [2024-11-19 21:26:50.140825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.422 [2024-11-19 21:26:50.153961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.422 [2024-11-19 21:26:50.154426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.422 [2024-11-19 21:26:50.154468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.422 [2024-11-19 21:26:50.154494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.422 [2024-11-19 21:26:50.154775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.422 [2024-11-19 21:26:50.155058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.422 [2024-11-19 21:26:50.155099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.422 [2024-11-19 21:26:50.155123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.422 [2024-11-19 21:26:50.155145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.422 [2024-11-19 21:26:50.168428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.422 [2024-11-19 21:26:50.168890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.422 [2024-11-19 21:26:50.168931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.422 [2024-11-19 21:26:50.168957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.422 [2024-11-19 21:26:50.169262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.422 [2024-11-19 21:26:50.169546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.422 [2024-11-19 21:26:50.169577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.422 [2024-11-19 21:26:50.169599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.422 [2024-11-19 21:26:50.169621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.422 [2024-11-19 21:26:50.182812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.422 [2024-11-19 21:26:50.183271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.422 [2024-11-19 21:26:50.183313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.422 [2024-11-19 21:26:50.183339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.422 [2024-11-19 21:26:50.183621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.422 [2024-11-19 21:26:50.183904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.422 [2024-11-19 21:26:50.183935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.422 [2024-11-19 21:26:50.183958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.422 [2024-11-19 21:26:50.183980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.422 [2024-11-19 21:26:50.197192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.422 [2024-11-19 21:26:50.197677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.422 [2024-11-19 21:26:50.197737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.422 [2024-11-19 21:26:50.197764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.422 [2024-11-19 21:26:50.198047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.422 [2024-11-19 21:26:50.198340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.422 [2024-11-19 21:26:50.198372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.422 [2024-11-19 21:26:50.198394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.422 [2024-11-19 21:26:50.198416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.422 [2024-11-19 21:26:50.211568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.422 [2024-11-19 21:26:50.212039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.422 [2024-11-19 21:26:50.212090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.422 [2024-11-19 21:26:50.212125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.422 [2024-11-19 21:26:50.212428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.422 [2024-11-19 21:26:50.212721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.422 [2024-11-19 21:26:50.212751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.422 [2024-11-19 21:26:50.212774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.422 [2024-11-19 21:26:50.212796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.682 [2024-11-19 21:26:50.226110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.682 [2024-11-19 21:26:50.226596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.682 [2024-11-19 21:26:50.226639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.682 [2024-11-19 21:26:50.226666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.682 [2024-11-19 21:26:50.226950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.682 [2024-11-19 21:26:50.227257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.682 [2024-11-19 21:26:50.227289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.682 [2024-11-19 21:26:50.227312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.682 [2024-11-19 21:26:50.227336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.682 [2024-11-19 21:26:50.240712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.682 [2024-11-19 21:26:50.241183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.682 [2024-11-19 21:26:50.241225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.682 [2024-11-19 21:26:50.241251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.682 [2024-11-19 21:26:50.241543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.682 [2024-11-19 21:26:50.241830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.682 [2024-11-19 21:26:50.241861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.682 [2024-11-19 21:26:50.241884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.682 [2024-11-19 21:26:50.241906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.682 [2024-11-19 21:26:50.255300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.682 [2024-11-19 21:26:50.255778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.682 [2024-11-19 21:26:50.255820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.682 [2024-11-19 21:26:50.255846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.682 [2024-11-19 21:26:50.256149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.682 [2024-11-19 21:26:50.256453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.682 [2024-11-19 21:26:50.256485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.682 [2024-11-19 21:26:50.256507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.682 [2024-11-19 21:26:50.256530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.682 [2024-11-19 21:26:50.269862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.682 [2024-11-19 21:26:50.270326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.682 [2024-11-19 21:26:50.270378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.682 [2024-11-19 21:26:50.270404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.682 [2024-11-19 21:26:50.270689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.682 [2024-11-19 21:26:50.270975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.682 [2024-11-19 21:26:50.271007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.682 [2024-11-19 21:26:50.271030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.682 [2024-11-19 21:26:50.271064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.682 [2024-11-19 21:26:50.284494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.682 [2024-11-19 21:26:50.284952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.682 [2024-11-19 21:26:50.284993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.682 [2024-11-19 21:26:50.285019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.682 [2024-11-19 21:26:50.285313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.682 [2024-11-19 21:26:50.285612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.682 [2024-11-19 21:26:50.285643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.682 [2024-11-19 21:26:50.285666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.682 [2024-11-19 21:26:50.285688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.682 [2024-11-19 21:26:50.291247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:16.682 [2024-11-19 21:26:50.299039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.682 [2024-11-19 21:26:50.299467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.682 [2024-11-19 21:26:50.299508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.682 [2024-11-19 21:26:50.299535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.682 [2024-11-19 21:26:50.299819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.682 [2024-11-19 21:26:50.300118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.682 [2024-11-19 21:26:50.300155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.682 [2024-11-19 21:26:50.300178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.682 [2024-11-19 21:26:50.300200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.682 [2024-11-19 21:26:50.313684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.682 [2024-11-19 21:26:50.314297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.682 [2024-11-19 21:26:50.314365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.682 [2024-11-19 21:26:50.314396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.682 [2024-11-19 21:26:50.314695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.682 [2024-11-19 21:26:50.314994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.682 [2024-11-19 21:26:50.315027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.682 [2024-11-19 21:26:50.315053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.682 [2024-11-19 21:26:50.315092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.682 [2024-11-19 21:26:50.328398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.682 [2024-11-19 21:26:50.328910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.682 [2024-11-19 21:26:50.328951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.682 [2024-11-19 21:26:50.328978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.682 [2024-11-19 21:26:50.329281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.682 [2024-11-19 21:26:50.329569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.682 [2024-11-19 21:26:50.329600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.683 [2024-11-19 21:26:50.329623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.683 [2024-11-19 21:26:50.329645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.683 [2024-11-19 21:26:50.343052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.683 [2024-11-19 21:26:50.343491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.683 [2024-11-19 21:26:50.343533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.683 [2024-11-19 21:26:50.343560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.683 [2024-11-19 21:26:50.343854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.683 [2024-11-19 21:26:50.344156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.683 [2024-11-19 21:26:50.344188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.683 [2024-11-19 21:26:50.344210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.683 [2024-11-19 21:26:50.344239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.683 [2024-11-19 21:26:50.357542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.683 [2024-11-19 21:26:50.358036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.683 [2024-11-19 21:26:50.358097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.683 [2024-11-19 21:26:50.358126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.683 [2024-11-19 21:26:50.358416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.683 [2024-11-19 21:26:50.358711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.683 [2024-11-19 21:26:50.358749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.683 [2024-11-19 21:26:50.358771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.683 [2024-11-19 21:26:50.358794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.683 [2024-11-19 21:26:50.372016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.683 [2024-11-19 21:26:50.372497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.683 [2024-11-19 21:26:50.372564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.683 [2024-11-19 21:26:50.372591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.683 [2024-11-19 21:26:50.372883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.683 [2024-11-19 21:26:50.373182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.683 [2024-11-19 21:26:50.373214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.683 [2024-11-19 21:26:50.373237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.683 [2024-11-19 21:26:50.373258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.683 [2024-11-19 21:26:50.386505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.683 [2024-11-19 21:26:50.386961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.683 [2024-11-19 21:26:50.387003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.683 [2024-11-19 21:26:50.387041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.683 [2024-11-19 21:26:50.387336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.683 [2024-11-19 21:26:50.387624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.683 [2024-11-19 21:26:50.387655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.683 [2024-11-19 21:26:50.387678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.683 [2024-11-19 21:26:50.387700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.683 [2024-11-19 21:26:50.401001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.683 [2024-11-19 21:26:50.401513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.683 [2024-11-19 21:26:50.401554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.683 [2024-11-19 21:26:50.401581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.683 [2024-11-19 21:26:50.401867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.683 [2024-11-19 21:26:50.402178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.683 [2024-11-19 21:26:50.402210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.683 [2024-11-19 21:26:50.402233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.683 [2024-11-19 21:26:50.402255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.683 [2024-11-19 21:26:50.415564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.683 [2024-11-19 21:26:50.416015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.683 [2024-11-19 21:26:50.416057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.683 [2024-11-19 21:26:50.416092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.683 [2024-11-19 21:26:50.416379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.683 [2024-11-19 21:26:50.416666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.683 [2024-11-19 21:26:50.416698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.683 [2024-11-19 21:26:50.416721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.683 [2024-11-19 21:26:50.416745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.683 [2024-11-19 21:26:50.430378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.683 [2024-11-19 21:26:50.430863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.683 [2024-11-19 21:26:50.430914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.683 [2024-11-19 21:26:50.430941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.683 [2024-11-19 21:26:50.431269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.683 [2024-11-19 21:26:50.431557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.683 [2024-11-19 21:26:50.431588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.683 [2024-11-19 21:26:50.431611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.683 [2024-11-19 21:26:50.431633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.683 [2024-11-19 21:26:50.435309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:16.683 [2024-11-19 21:26:50.435361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:16.683 [2024-11-19 21:26:50.435387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:16.683 [2024-11-19 21:26:50.435430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:16.683 [2024-11-19 21:26:50.435450] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:16.683 [2024-11-19 21:26:50.438398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:16.683 [2024-11-19 21:26:50.438441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:16.683 [2024-11-19 21:26:50.438446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:16.683 [2024-11-19 21:26:50.445158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.683 [2024-11-19 21:26:50.445739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.683 [2024-11-19 21:26:50.445800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.683 [2024-11-19 21:26:50.445841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.683 [2024-11-19 21:26:50.446160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.683 [2024-11-19 21:26:50.446470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.683 [2024-11-19 21:26:50.446502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.683 [2024-11-19 21:26:50.446528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.683 [2024-11-19 21:26:50.446558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.683 [2024-11-19 21:26:50.459897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.683 [2024-11-19 21:26:50.460554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.683 [2024-11-19 21:26:50.460607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.683 [2024-11-19 21:26:50.460644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.684 [2024-11-19 21:26:50.460942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.684 [2024-11-19 21:26:50.461256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.684 [2024-11-19 21:26:50.461289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.684 [2024-11-19 21:26:50.461316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.684 [2024-11-19 21:26:50.461341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.943 [2024-11-19 21:26:50.474633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.943 [2024-11-19 21:26:50.475109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.943 [2024-11-19 21:26:50.475152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.943 [2024-11-19 21:26:50.475179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.943 [2024-11-19 21:26:50.475469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.943 [2024-11-19 21:26:50.475757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.943 [2024-11-19 21:26:50.475788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.943 [2024-11-19 21:26:50.475818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.943 [2024-11-19 21:26:50.475842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.943 [2024-11-19 21:26:50.489210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.943 [2024-11-19 21:26:50.489688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.943 [2024-11-19 21:26:50.489730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.943 [2024-11-19 21:26:50.489758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.943 [2024-11-19 21:26:50.490045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.943 [2024-11-19 21:26:50.490349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.943 [2024-11-19 21:26:50.490385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.943 [2024-11-19 21:26:50.490407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.943 [2024-11-19 21:26:50.490429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.943 [2024-11-19 21:26:50.503680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.943 [2024-11-19 21:26:50.504152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.943 [2024-11-19 21:26:50.504194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.943 [2024-11-19 21:26:50.504220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.943 [2024-11-19 21:26:50.504506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.943 [2024-11-19 21:26:50.504793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.943 [2024-11-19 21:26:50.504824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.943 [2024-11-19 21:26:50.504847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.943 [2024-11-19 21:26:50.504869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.943 [2024-11-19 21:26:50.518099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.943 [2024-11-19 21:26:50.518584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.943 [2024-11-19 21:26:50.518628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.943 [2024-11-19 21:26:50.518655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.943 [2024-11-19 21:26:50.518951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.943 [2024-11-19 21:26:50.519254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.943 [2024-11-19 21:26:50.519286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.943 [2024-11-19 21:26:50.519310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.943 [2024-11-19 21:26:50.519332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.943 [2024-11-19 21:26:50.532622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.943 [2024-11-19 21:26:50.533263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.943 [2024-11-19 21:26:50.533320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.943 [2024-11-19 21:26:50.533350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.943 [2024-11-19 21:26:50.533645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.943 [2024-11-19 21:26:50.533939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.943 [2024-11-19 21:26:50.533972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.943 [2024-11-19 21:26:50.533999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.943 [2024-11-19 21:26:50.534025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.943 [2024-11-19 21:26:50.547376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.943 [2024-11-19 21:26:50.548017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.943 [2024-11-19 21:26:50.548088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.943 [2024-11-19 21:26:50.548121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.943 [2024-11-19 21:26:50.548422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.943 [2024-11-19 21:26:50.548718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.943 [2024-11-19 21:26:50.548750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.943 [2024-11-19 21:26:50.548776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.943 [2024-11-19 21:26:50.548802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.943 [2024-11-19 21:26:50.562031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.943 [2024-11-19 21:26:50.562515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.943 [2024-11-19 21:26:50.562558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.943 [2024-11-19 21:26:50.562586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.943 [2024-11-19 21:26:50.562874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.944 [2024-11-19 21:26:50.563178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.944 [2024-11-19 21:26:50.563210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.944 [2024-11-19 21:26:50.563233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.944 [2024-11-19 21:26:50.563255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.944 [2024-11-19 21:26:50.576641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.944 [2024-11-19 21:26:50.577094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.944 [2024-11-19 21:26:50.577136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.944 [2024-11-19 21:26:50.577169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.944 [2024-11-19 21:26:50.577475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.944 [2024-11-19 21:26:50.577765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.944 [2024-11-19 21:26:50.577796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.944 [2024-11-19 21:26:50.577819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.944 [2024-11-19 21:26:50.577841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.944 [2024-11-19 21:26:50.591135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.944 [2024-11-19 21:26:50.591545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.944 [2024-11-19 21:26:50.591588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.944 [2024-11-19 21:26:50.591615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.944 [2024-11-19 21:26:50.591899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.944 [2024-11-19 21:26:50.592203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.944 [2024-11-19 21:26:50.592235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.944 [2024-11-19 21:26:50.592258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.944 [2024-11-19 21:26:50.592280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.944 [2024-11-19 21:26:50.605679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.944 [2024-11-19 21:26:50.606112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.944 [2024-11-19 21:26:50.606155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.944 [2024-11-19 21:26:50.606182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.944 [2024-11-19 21:26:50.606466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.944 [2024-11-19 21:26:50.606753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.944 [2024-11-19 21:26:50.606784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.944 [2024-11-19 21:26:50.606807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.944 [2024-11-19 21:26:50.606829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.944 [2024-11-19 21:26:50.620152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.944 [2024-11-19 21:26:50.620588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.944 [2024-11-19 21:26:50.620630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.944 [2024-11-19 21:26:50.620656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.944 [2024-11-19 21:26:50.620945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.944 [2024-11-19 21:26:50.621275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.944 [2024-11-19 21:26:50.621308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.944 [2024-11-19 21:26:50.621331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.944 [2024-11-19 21:26:50.621362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.944 [2024-11-19 21:26:50.634594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.944 [2024-11-19 21:26:50.635034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.944 [2024-11-19 21:26:50.635084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.944 [2024-11-19 21:26:50.635113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.944 [2024-11-19 21:26:50.635405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.944 [2024-11-19 21:26:50.635689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.944 [2024-11-19 21:26:50.635720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.944 [2024-11-19 21:26:50.635742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.944 [2024-11-19 21:26:50.635764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.944 [2024-11-19 21:26:50.648995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.944 [2024-11-19 21:26:50.649448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.944 [2024-11-19 21:26:50.649500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.944 [2024-11-19 21:26:50.649527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.944 [2024-11-19 21:26:50.649807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.944 [2024-11-19 21:26:50.650106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.944 [2024-11-19 21:26:50.650137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.944 [2024-11-19 21:26:50.650160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.944 [2024-11-19 21:26:50.650182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.944 [2024-11-19 21:26:50.663397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.944 [2024-11-19 21:26:50.663851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.944 [2024-11-19 21:26:50.663892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.944 [2024-11-19 21:26:50.663919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.944 [2024-11-19 21:26:50.664213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.944 [2024-11-19 21:26:50.664508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.944 [2024-11-19 21:26:50.664545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.944 [2024-11-19 21:26:50.664568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.944 [2024-11-19 21:26:50.664590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.944 [2024-11-19 21:26:50.677831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.944 [2024-11-19 21:26:50.678330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.944 [2024-11-19 21:26:50.678375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.944 [2024-11-19 21:26:50.678403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.944 [2024-11-19 21:26:50.678687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.944 [2024-11-19 21:26:50.678976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.944 [2024-11-19 21:26:50.679008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.944 [2024-11-19 21:26:50.679031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.944 [2024-11-19 21:26:50.679064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.944 [2024-11-19 21:26:50.692521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.944 [2024-11-19 21:26:50.693182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.944 [2024-11-19 21:26:50.693236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.944 [2024-11-19 21:26:50.693268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.944 [2024-11-19 21:26:50.693566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.944 [2024-11-19 21:26:50.693864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.944 [2024-11-19 21:26:50.693896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.944 [2024-11-19 21:26:50.693922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.944 [2024-11-19 21:26:50.693949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.944 [2024-11-19 21:26:50.707280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.944 [2024-11-19 21:26:50.707823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.944 [2024-11-19 21:26:50.707873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.944 [2024-11-19 21:26:50.707905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.945 [2024-11-19 21:26:50.708210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.945 [2024-11-19 21:26:50.708503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.945 [2024-11-19 21:26:50.708535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.945 [2024-11-19 21:26:50.708562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.945 [2024-11-19 21:26:50.708593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.945 [2024-11-19 21:26:50.721981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.945 [2024-11-19 21:26:50.722480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.945 [2024-11-19 21:26:50.722521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.945 [2024-11-19 21:26:50.722548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.945 [2024-11-19 21:26:50.722833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.945 [2024-11-19 21:26:50.723137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.945 [2024-11-19 21:26:50.723169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.945 [2024-11-19 21:26:50.723192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.945 [2024-11-19 21:26:50.723214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.204 [2024-11-19 21:26:50.736620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.204 [2024-11-19 21:26:50.737086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-11-19 21:26:50.737128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.204 [2024-11-19 21:26:50.737154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.204 [2024-11-19 21:26:50.737439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.204 [2024-11-19 21:26:50.737725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.204 [2024-11-19 21:26:50.737757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.204 [2024-11-19 21:26:50.737780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.204 [2024-11-19 21:26:50.737802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:17.204 [2024-11-19 21:26:50.751201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.204 [2024-11-19 21:26:50.751628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-11-19 21:26:50.751669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.204 [2024-11-19 21:26:50.751696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.204 [2024-11-19 21:26:50.751982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.204 [2024-11-19 21:26:50.752282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.204 [2024-11-19 21:26:50.752314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.204 [2024-11-19 21:26:50.752338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.204 [2024-11-19 21:26:50.752360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.204 [2024-11-19 21:26:50.765688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.204 [2024-11-19 21:26:50.766210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-11-19 21:26:50.766253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.204 [2024-11-19 21:26:50.766280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.204 [2024-11-19 21:26:50.766565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.204 [2024-11-19 21:26:50.766851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.204 [2024-11-19 21:26:50.766882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.204 [2024-11-19 21:26:50.766904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.204 [2024-11-19 21:26:50.766927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:17.204 [2024-11-19 21:26:50.780191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.204 [2024-11-19 21:26:50.780648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-11-19 21:26:50.780690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.204 [2024-11-19 21:26:50.780716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.204 [2024-11-19 21:26:50.781000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.204 [2024-11-19 21:26:50.781298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.204 [2024-11-19 21:26:50.781343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.204 [2024-11-19 21:26:50.781366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.204 [2024-11-19 21:26:50.781388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.204 [2024-11-19 21:26:50.794652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.204 [2024-11-19 21:26:50.795157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-11-19 21:26:50.795201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.204 [2024-11-19 21:26:50.795228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.204 [2024-11-19 21:26:50.795515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.204 [2024-11-19 21:26:50.795802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.204 [2024-11-19 21:26:50.795833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.204 [2024-11-19 21:26:50.795856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.204 [2024-11-19 21:26:50.795878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:17.204 [2024-11-19 21:26:50.809254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.204 [2024-11-19 21:26:50.809764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-11-19 21:26:50.809809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.204 [2024-11-19 21:26:50.809843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.204 [2024-11-19 21:26:50.810147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.204 [2024-11-19 21:26:50.810438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.204 [2024-11-19 21:26:50.810469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.204 [2024-11-19 21:26:50.810493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.204 [2024-11-19 21:26:50.810516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.204 [2024-11-19 21:26:50.823771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.204 [2024-11-19 21:26:50.824252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-11-19 21:26:50.824294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.204 [2024-11-19 21:26:50.824320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.204 [2024-11-19 21:26:50.824608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.204 [2024-11-19 21:26:50.824898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.204 [2024-11-19 21:26:50.824929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.205 [2024-11-19 21:26:50.824952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.205 [2024-11-19 21:26:50.824975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:17.205 [2024-11-19 21:26:50.838416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.205 [2024-11-19 21:26:50.838860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-11-19 21:26:50.838901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.205 [2024-11-19 21:26:50.838928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.205 [2024-11-19 21:26:50.839225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.205 [2024-11-19 21:26:50.839515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.205 [2024-11-19 21:26:50.839546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.205 [2024-11-19 21:26:50.839569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.205 [2024-11-19 21:26:50.839591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.205 [2024-11-19 21:26:50.852926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.205 [2024-11-19 21:26:50.853385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-11-19 21:26:50.853427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.205 [2024-11-19 21:26:50.853453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.205 [2024-11-19 21:26:50.853734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.205 [2024-11-19 21:26:50.854026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.205 [2024-11-19 21:26:50.854059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.205 [2024-11-19 21:26:50.854093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.205 [2024-11-19 21:26:50.854117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:17.205 [2024-11-19 21:26:50.867416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.205 [2024-11-19 21:26:50.867843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-11-19 21:26:50.867885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.205 [2024-11-19 21:26:50.867911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.205 [2024-11-19 21:26:50.868203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.205 [2024-11-19 21:26:50.868517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.205 [2024-11-19 21:26:50.868549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.205 [2024-11-19 21:26:50.868571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.205 [2024-11-19 21:26:50.868593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.205 [2024-11-19 21:26:50.881779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.205 [2024-11-19 21:26:50.882210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-11-19 21:26:50.882251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.205 [2024-11-19 21:26:50.882278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.205 [2024-11-19 21:26:50.882560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.205 [2024-11-19 21:26:50.882845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.205 [2024-11-19 21:26:50.882877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.205 [2024-11-19 21:26:50.882899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.205 [2024-11-19 21:26:50.882922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:17.205 [2024-11-19 21:26:50.896375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.205 [2024-11-19 21:26:50.896834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-11-19 21:26:50.896875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.205 [2024-11-19 21:26:50.896902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.205 [2024-11-19 21:26:50.897195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.205 [2024-11-19 21:26:50.897478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.205 [2024-11-19 21:26:50.897509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.205 [2024-11-19 21:26:50.897541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.205 [2024-11-19 21:26:50.897565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.205 [2024-11-19 21:26:50.910938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.205 [2024-11-19 21:26:50.911413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-11-19 21:26:50.911454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.205 [2024-11-19 21:26:50.911481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.205 [2024-11-19 21:26:50.911762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.205 [2024-11-19 21:26:50.912044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.205 [2024-11-19 21:26:50.912084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.205 [2024-11-19 21:26:50.912110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.205 [2024-11-19 21:26:50.912132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:17.205 [2024-11-19 21:26:50.925510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.205 [2024-11-19 21:26:50.925930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-11-19 21:26:50.925971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.205 [2024-11-19 21:26:50.925997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.205 [2024-11-19 21:26:50.926296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.205 [2024-11-19 21:26:50.926579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.205 [2024-11-19 21:26:50.926610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.205 [2024-11-19 21:26:50.926633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.205 [2024-11-19 21:26:50.926654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.205 [2024-11-19 21:26:50.940092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.205 [2024-11-19 21:26:50.940550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-11-19 21:26:50.940593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.205 [2024-11-19 21:26:50.940619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.205 [2024-11-19 21:26:50.940901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.205 [2024-11-19 21:26:50.941198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.205 [2024-11-19 21:26:50.941230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.205 [2024-11-19 21:26:50.941255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.205 [2024-11-19 21:26:50.941277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:17.205 [2024-11-19 21:26:50.954525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.205 [2024-11-19 21:26:50.954957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-11-19 21:26:50.954998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.205 [2024-11-19 21:26:50.955024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.205 [2024-11-19 21:26:50.955327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.205 [2024-11-19 21:26:50.955616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.205 [2024-11-19 21:26:50.955647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.206 [2024-11-19 21:26:50.955670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.206 [2024-11-19 21:26:50.955691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.206 2125.67 IOPS, 8.30 MiB/s [2024-11-19T20:26:51.001Z] [2024-11-19 21:26:50.970724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.206 [2024-11-19 21:26:50.971190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.206 [2024-11-19 21:26:50.971232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.206 [2024-11-19 21:26:50.971260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.206 [2024-11-19 21:26:50.971540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.206 [2024-11-19 21:26:50.971824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.206 [2024-11-19 21:26:50.971856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.206 [2024-11-19 21:26:50.971878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.206 [2024-11-19 21:26:50.971900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:17.206 [2024-11-19 21:26:50.985279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.206 [2024-11-19 21:26:50.985726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.206 [2024-11-19 21:26:50.985767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.206 [2024-11-19 21:26:50.985794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.206 [2024-11-19 21:26:50.986085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.206 [2024-11-19 21:26:50.986369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.206 [2024-11-19 21:26:50.986400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.206 [2024-11-19 21:26:50.986424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.206 [2024-11-19 21:26:50.986462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.466 [2024-11-19 21:26:50.999845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.466 [2024-11-19 21:26:51.000272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.466 [2024-11-19 21:26:51.000314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.466 [2024-11-19 21:26:51.000340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.466 [2024-11-19 21:26:51.000623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.466 [2024-11-19 21:26:51.000905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.466 [2024-11-19 21:26:51.000936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.466 [2024-11-19 21:26:51.000958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.466 [2024-11-19 21:26:51.000980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:17.466 [2024-11-19 21:26:51.014370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.466 [2024-11-19 21:26:51.014818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.466 [2024-11-19 21:26:51.014859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.466 [2024-11-19 21:26:51.014885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.466 [2024-11-19 21:26:51.015180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.466 [2024-11-19 21:26:51.015464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.466 [2024-11-19 21:26:51.015496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.466 [2024-11-19 21:26:51.015519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.466 [2024-11-19 21:26:51.015541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.466 [2024-11-19 21:26:51.028918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.466 [2024-11-19 21:26:51.029366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.466 [2024-11-19 21:26:51.029408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.466 [2024-11-19 21:26:51.029435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.466 [2024-11-19 21:26:51.029716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.466 [2024-11-19 21:26:51.029998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.466 [2024-11-19 21:26:51.030030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.466 [2024-11-19 21:26:51.030053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.466 [2024-11-19 21:26:51.030085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:17.466 [2024-11-19 21:26:51.043468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.466 [2024-11-19 21:26:51.043915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.466 [2024-11-19 21:26:51.043956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.466 [2024-11-19 21:26:51.043988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.466 [2024-11-19 21:26:51.044281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.466 [2024-11-19 21:26:51.044566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.466 [2024-11-19 21:26:51.044597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.466 [2024-11-19 21:26:51.044621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.466 [2024-11-19 21:26:51.044643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.466 [2024-11-19 21:26:51.057974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.466 [2024-11-19 21:26:51.058405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.466 [2024-11-19 21:26:51.058448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.466 [2024-11-19 21:26:51.058474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.466 [2024-11-19 21:26:51.058755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.466 [2024-11-19 21:26:51.059037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.466 [2024-11-19 21:26:51.059077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.466 [2024-11-19 21:26:51.059103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.466 [2024-11-19 21:26:51.059125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:17.466 [2024-11-19 21:26:51.072470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.466 [2024-11-19 21:26:51.072905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.466 [2024-11-19 21:26:51.072948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.466 [2024-11-19 21:26:51.072975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.466 [2024-11-19 21:26:51.073267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.466 [2024-11-19 21:26:51.073551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.466 [2024-11-19 21:26:51.073582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.466 [2024-11-19 21:26:51.073605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.466 [2024-11-19 21:26:51.073626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.466 [2024-11-19 21:26:51.086923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.466 [2024-11-19 21:26:51.087329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.466 [2024-11-19 21:26:51.087370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.466 [2024-11-19 21:26:51.087399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.466 [2024-11-19 21:26:51.087698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.466 [2024-11-19 21:26:51.087987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.466 [2024-11-19 21:26:51.088019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.466 [2024-11-19 21:26:51.088042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.466 [2024-11-19 21:26:51.088064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:17.466 [2024-11-19 21:26:51.101360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.466 [2024-11-19 21:26:51.101823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.466 [2024-11-19 21:26:51.101864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.467 [2024-11-19 21:26:51.101889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.467 [2024-11-19 21:26:51.102182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.467 [2024-11-19 21:26:51.102464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.467 [2024-11-19 21:26:51.102494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.467 [2024-11-19 21:26:51.102517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.467 [2024-11-19 21:26:51.102539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.467 [2024-11-19 21:26:51.115461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.467 [2024-11-19 21:26:51.115866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.467 [2024-11-19 21:26:51.115905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.467 [2024-11-19 21:26:51.115930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.467 [2024-11-19 21:26:51.116196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.467 [2024-11-19 21:26:51.116463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.467 [2024-11-19 21:26:51.116491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.467 [2024-11-19 21:26:51.116512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.467 [2024-11-19 21:26:51.116532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:17.467 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:17.467 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:17.467 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:17.467 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:17.467 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:17.467 [2024-11-19 21:26:51.129699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.467 [2024-11-19 21:26:51.130118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.467 [2024-11-19 21:26:51.130156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.467 [2024-11-19 21:26:51.130180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.467 [2024-11-19 21:26:51.130463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.467 [2024-11-19 21:26:51.130714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.467 [2024-11-19 21:26:51.130742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.467 [2024-11-19 21:26:51.130761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.467 [2024-11-19 21:26:51.130781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.467 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:17.467 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:17.467 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.467 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:17.467 [2024-11-19 21:26:51.140167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:17.467 [2024-11-19 21:26:51.143884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.467 [2024-11-19 21:26:51.144339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.467 [2024-11-19 21:26:51.144381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.467 [2024-11-19 21:26:51.144404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.467 [2024-11-19 21:26:51.144684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.467 [2024-11-19 21:26:51.144941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.467 [2024-11-19 21:26:51.144968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.467 [2024-11-19 21:26:51.144987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.467 [2024-11-19 21:26:51.145007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
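At this point the script installs trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT before any transport or subsystem exists, so debug collection and teardown still run if the test dies early. A generic sketch of that pattern with placeholder resources (mktemp stands in for the SPDK state; the helpers named in the trap are not reproduced here):

  #!/usr/bin/env bash
  set -e
  workdir=$(mktemp -d)

  cleanup() {
      ls -l "$workdir" || :   # stand-in for the process_shm debug dump; '|| :' keeps cleanup going
      rm -rf "$workdir"       # stand-in for nvmftestfini-style teardown
  }
  trap cleanup SIGINT SIGTERM EXIT   # registered before anything is created

  touch "$workdir/resource"          # stand-in for creating transport/bdev/subsystem
  false                              # simulated failure: the EXIT trap still runs cleanup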
00:37:17.467 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.467 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:17.467 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.467 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:17.467 [2024-11-19 21:26:51.158030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.467 [2024-11-19 21:26:51.158557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.467 [2024-11-19 21:26:51.158597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.467 [2024-11-19 21:26:51.158622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.467 [2024-11-19 21:26:51.158906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.467 [2024-11-19 21:26:51.159187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.467 [2024-11-19 21:26:51.159215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.467 [2024-11-19 21:26:51.159237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.467 [2024-11-19 21:26:51.159265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:17.467 [2024-11-19 21:26:51.172145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.467 [2024-11-19 21:26:51.172778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.467 [2024-11-19 21:26:51.172828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.467 [2024-11-19 21:26:51.172856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.467 [2024-11-19 21:26:51.173168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.467 [2024-11-19 21:26:51.173451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.467 [2024-11-19 21:26:51.173479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.467 [2024-11-19 21:26:51.173502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.467 [2024-11-19 21:26:51.173525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.467 [2024-11-19 21:26:51.186370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.467 [2024-11-19 21:26:51.186870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.467 [2024-11-19 21:26:51.186909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.467 [2024-11-19 21:26:51.186935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.467 [2024-11-19 21:26:51.187205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.467 [2024-11-19 21:26:51.187482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.467 [2024-11-19 21:26:51.187510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.467 [2024-11-19 21:26:51.187530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.467 [2024-11-19 21:26:51.187550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:17.467 [2024-11-19 21:26:51.200418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.467 [2024-11-19 21:26:51.200840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.467 [2024-11-19 21:26:51.200877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.467 [2024-11-19 21:26:51.200901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.467 [2024-11-19 21:26:51.201174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.467 [2024-11-19 21:26:51.201451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.467 [2024-11-19 21:26:51.201479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.467 [2024-11-19 21:26:51.201499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.467 [2024-11-19 21:26:51.201518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.467 [2024-11-19 21:26:51.214486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.467 [2024-11-19 21:26:51.214893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.467 [2024-11-19 21:26:51.214931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.467 [2024-11-19 21:26:51.214955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.467 [2024-11-19 21:26:51.215221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.467 [2024-11-19 21:26:51.215490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.467 [2024-11-19 21:26:51.215517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.468 [2024-11-19 21:26:51.215537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.468 [2024-11-19 21:26:51.215556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:17.468 [2024-11-19 21:26:51.228552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.468 [2024-11-19 21:26:51.228962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.468 [2024-11-19 21:26:51.228999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.468 [2024-11-19 21:26:51.229023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.468 [2024-11-19 21:26:51.229287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.468 [2024-11-19 21:26:51.229555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.468 [2024-11-19 21:26:51.229583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.468 [2024-11-19 21:26:51.229603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.468 [2024-11-19 21:26:51.229621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:17.468 Malloc0 00:37:17.468 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.468 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:17.468 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.468 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:17.468 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.468 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:17.468 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.468 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:17.468 [2024-11-19 21:26:51.242713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.468 [2024-11-19 21:26:51.243127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.468 [2024-11-19 21:26:51.243165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:17.468 [2024-11-19 21:26:51.243189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:17.468 [2024-11-19 21:26:51.243462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:17.468 [2024-11-19 21:26:51.243712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:17.468 [2024-11-19 21:26:51.243744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:17.468 [2024-11-19 21:26:51.243765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:17.468 [2024-11-19 21:26:51.243785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:17.468 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.468 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:17.468 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.468 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:17.468 [2024-11-19 21:26:51.251016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:17.468 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.468 21:26:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3166473 00:37:17.468 [2024-11-19 21:26:51.256940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:17.726 [2024-11-19 21:26:51.292974] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
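Taken together, the rpc_cmd calls traced above configure the target the host has been failing to reach: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420, which is why the very next reset attempt succeeds. Issued by hand, the sequence would look roughly like this (a sketch assuming rpc_cmd forwards its arguments unchanged to scripts/rpc.py of the running nvmf_tgt, which the trace suggests but does not show; the rpc.py path is assumed from this workspace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420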
00:37:19.223 2399.71 IOPS, 9.37 MiB/s [2024-11-19T20:26:54.388Z] 2872.75 IOPS, 11.22 MiB/s [2024-11-19T20:26:55.335Z] 3253.11 IOPS, 12.71 MiB/s [2024-11-19T20:26:55.993Z] 3547.40 IOPS, 13.86 MiB/s [2024-11-19T20:26:57.367Z] 3796.91 IOPS, 14.83 MiB/s [2024-11-19T20:26:58.300Z] 3992.50 IOPS, 15.60 MiB/s [2024-11-19T20:26:59.234Z] 4168.00 IOPS, 16.28 MiB/s [2024-11-19T20:27:00.166Z] 4313.71 IOPS, 16.85 MiB/s [2024-11-19T20:27:00.166Z] 4442.60 IOPS, 17.35 MiB/s 00:37:26.371 Latency(us) 00:37:26.371 [2024-11-19T20:27:00.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:26.371 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:26.371 Verification LBA range: start 0x0 length 0x4000 00:37:26.371 Nvme1n1 : 15.05 4436.02 17.33 9079.07 0.00 9417.53 1092.27 44661.57 00:37:26.371 [2024-11-19T20:27:00.166Z] =================================================================================================================== 00:37:26.371 [2024-11-19T20:27:00.166Z] Total : 4436.02 17.33 9079.07 0.00 9417.53 1092.27 44661.57 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:27.305 rmmod nvme_tcp 00:37:27.305 rmmod nvme_fabrics 00:37:27.305 rmmod nvme_keyring 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3167136 ']' 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3167136 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3167136 ']' 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3167136 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:27.305 21:27:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3167136 
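As a quick sanity check on the summary table: with the 4096-byte IO size from the job header, the MiB/s column follows directly from the IOPS column, e.g. for the Nvme1n1 row over its 15.05 s runtime:

  # 4436.02 IOPS at 4096 bytes per IO:
  awk 'BEGIN { printf "%.2f MiB/s\n", 4436.02 * 4096 / (1024 * 1024) }'
  # prints 17.33 MiB/s, matching the table's MiB/s value for Nvme1n1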
00:37:27.305 21:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:27.305 21:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:27.305 21:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3167136' 00:37:27.305 killing process with pid 3167136 00:37:27.305 21:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3167136 00:37:27.305 21:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3167136 00:37:28.681 21:27:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:28.681 21:27:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:28.681 21:27:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:28.681 21:27:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:37:28.681 21:27:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:37:28.681 21:27:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:28.681 21:27:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:37:28.681 21:27:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:28.681 21:27:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:28.681 21:27:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:28.681 21:27:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:28.681 21:27:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:30.588 21:27:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:30.588 00:37:30.588 real 0m26.495s 00:37:30.588 user 1m12.641s 00:37:30.588 sys 0m4.783s 00:37:30.588 21:27:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:30.588 21:27:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:30.588 ************************************ 00:37:30.588 END TEST nvmf_bdevperf 00:37:30.588 ************************************ 00:37:30.588 21:27:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:30.588 21:27:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:30.588 21:27:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:30.588 21:27:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.588 ************************************ 00:37:30.588 START TEST nvmf_target_disconnect 00:37:30.588 ************************************ 00:37:30.588 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:30.588 * Looking for test storage... 
00:37:30.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:30.588 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:30.588 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:37:30.588 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:30.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.847 --rc genhtml_branch_coverage=1 00:37:30.847 --rc genhtml_function_coverage=1 00:37:30.847 --rc genhtml_legend=1 00:37:30.847 --rc geninfo_all_blocks=1 00:37:30.847 --rc geninfo_unexecuted_blocks=1 00:37:30.847 00:37:30.847 ' 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:30.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.847 --rc genhtml_branch_coverage=1 00:37:30.847 --rc genhtml_function_coverage=1 00:37:30.847 --rc genhtml_legend=1 00:37:30.847 --rc geninfo_all_blocks=1 00:37:30.847 --rc geninfo_unexecuted_blocks=1 00:37:30.847 00:37:30.847 ' 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:30.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.847 --rc genhtml_branch_coverage=1 00:37:30.847 --rc genhtml_function_coverage=1 00:37:30.847 --rc genhtml_legend=1 00:37:30.847 --rc geninfo_all_blocks=1 00:37:30.847 --rc geninfo_unexecuted_blocks=1 00:37:30.847 00:37:30.847 ' 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:30.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.847 --rc genhtml_branch_coverage=1 00:37:30.847 --rc genhtml_function_coverage=1 00:37:30.847 --rc genhtml_legend=1 00:37:30.847 --rc geninfo_all_blocks=1 00:37:30.847 --rc geninfo_unexecuted_blocks=1 00:37:30.847 00:37:30.847 ' 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.847 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:30.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:37:30.848 21:27:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:32.752 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:32.752 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:37:32.752 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:32.752 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:32.752 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:32.752 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:32.752 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:32.752 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:37:32.752 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:32.752 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:37:32.752 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:37:32.752 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:37:32.752 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:37:32.752 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:37:32.752 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:37:32.752 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:32.752 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:32.752 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:32.752 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:32.753 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:32.753 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:32.753 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:32.753 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
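The device discovery traced above is just a sysfs walk: for each PCI function whose vendor/device pair is on the supported list, the script looks for a net/ child directory to learn which kernel netdev sits on top of it (here cvl_0_0 and cvl_0_1, bound to the ice driver). A minimal stand-alone sketch of that loop, assuming only the Intel E810 ID seen in this run (0x8086:0x159b) and the standard sysfs layout:

#!/usr/bin/env bash
# Minimal sketch of the sysfs walk done by gather_supported_nvmf_pci_devs,
# limited to the Intel E810 ID (0x8086:0x159b) that this run found.
intel=0x8086
e810=0x159b

for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")
    device=$(<"$pci/device")
    [[ $vendor == "$intel" && $device == "$e810" ]] || continue
    echo "Found ${pci##*/} ($vendor - $device)"
    # net/ only exists once a netdev driver (ice here) is bound to the function.
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "  net device: ${net##*/}"
    done
done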
00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:32.753 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:33.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:33.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:37:33.012 00:37:33.012 --- 10.0.0.2 ping statistics --- 00:37:33.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.012 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:33.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:33.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:37:33.012 00:37:33.012 --- 10.0.0.1 ping statistics --- 00:37:33.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.012 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:33.012 ************************************ 00:37:33.012 START TEST nvmf_target_disconnect_tc1 00:37:33.012 ************************************ 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:33.012 21:27:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:37:33.012 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:33.271 [2024-11-19 21:27:06.839255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.271 [2024-11-19 21:27:06.839360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:37:33.271 [2024-11-19 21:27:06.839455] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:33.271 [2024-11-19 21:27:06.839507] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:33.271 [2024-11-19 21:27:06.839530] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:37:33.271 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:37:33.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:33.271 Initializing NVMe Controllers 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:33.271 00:37:33.271 real 0m0.225s 00:37:33.271 user 0m0.104s 00:37:33.271 sys 0m0.120s 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:33.271 ************************************ 00:37:33.271 END TEST nvmf_target_disconnect_tc1 00:37:33.271 ************************************ 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:33.271 ************************************ 00:37:33.271 START TEST nvmf_target_disconnect_tc2 00:37:33.271 ************************************ 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3170661 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3170661 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3170661 ']' 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:33.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:33.271 21:27:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:33.271 [2024-11-19 21:27:07.016453] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:37:33.271 [2024-11-19 21:27:07.016611] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:33.529 [2024-11-19 21:27:07.165228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:33.529 [2024-11-19 21:27:07.293752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:33.529 [2024-11-19 21:27:07.293826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
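Stripped of the xtrace noise, the namespace plumbing performed by nvmf_tcp_init above and the target launch that nvmfappstart just did reduce to a handful of ip/iptables commands plus one nvmf_tgt invocation. A condensed sketch, with the workspace path shortened to $SPDK (an assumption) and the interface names taken from the discovery step:

#!/usr/bin/env bash
# Condensed sketch of the bring-up traced above (nvmf_tcp_init + nvmfappstart).
set -e
SPDK=./spdk                 # assumption: the run uses the full jenkins workspace path
TARGET_IF=cvl_0_0           # moved into the namespace, carries the target IP
INITIATOR_IF=cvl_0_1        # stays in the default namespace
NS_NAME=cvl_0_0_ns_spdk
NS="ip netns exec $NS_NAME"

ip netns add "$NS_NAME"
ip link set "$TARGET_IF" netns "$NS_NAME"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
$NS ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
$NS ip link set "$TARGET_IF" up
$NS ip link set lo up

# Let NVMe/TCP traffic (port 4420) in on the initiator-side interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions, as the script does with ping -c 1.
ping -c 1 10.0.0.2
$NS ping -c 1 10.0.0.1

# Launch the NVMe-oF target inside the namespace (tc2's nvmfappstart -m 0xF0).
$NS "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!

# Wait for the RPC socket, which is what waitforlisten polls for.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done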
00:37:33.529 [2024-11-19 21:27:07.293850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:33.529 [2024-11-19 21:27:07.293870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:33.529 [2024-11-19 21:27:07.293887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:33.529 [2024-11-19 21:27:07.296561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:33.529 [2024-11-19 21:27:07.296625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:33.529 [2024-11-19 21:27:07.296775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:33.529 [2024-11-19 21:27:07.296780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:34.464 21:27:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:34.464 21:27:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:37:34.464 21:27:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:34.464 21:27:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:34.464 21:27:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.464 Malloc0 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.464 [2024-11-19 21:27:08.110785] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.464 21:27:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.464 [2024-11-19 21:27:08.140716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3171178 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:34.464 21:27:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:36.364 21:27:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3170661 00:37:36.364 21:27:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:37:36.639 Read completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Read completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Read completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Read completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Write completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Read completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Read completed with error 
(sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Read completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Read completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Read completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Read completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Read completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Read completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Write completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Write completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Read completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Read completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Read completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Read completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Read completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Read completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Write completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Write completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Write completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Read completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Write completed with error (sct=0, sc=8) 00:37:36.639 starting I/O failed 00:37:36.639 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 [2024-11-19 21:27:10.187481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read 
completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 [2024-11-19 21:27:10.188194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 
00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 [2024-11-19 21:27:10.188822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 
00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Write completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 Read completed with error (sct=0, sc=8) 00:37:36.640 starting I/O failed 00:37:36.640 [2024-11-19 21:27:10.189481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:36.640 [2024-11-19 21:27:10.189760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 21:27:10.189808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 21:27:10.190089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.190147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.190298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.190333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.190493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.190531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.190693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.190727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.190875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.190909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.191036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.191078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.191230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.191264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 
00:37:36.641 [2024-11-19 21:27:10.191420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.191455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.191566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.191600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.191762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.191800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.191957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.191991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.192131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.192165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.192273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.192307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.192478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.192512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.192644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.192678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.192814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.192848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.192958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.192993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 
00:37:36.641 [2024-11-19 21:27:10.193125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.193159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.193287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.193394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.193667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.193703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.193844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.193879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.194014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.194047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.194243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.194292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.194462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.194511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.194633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.194672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.194820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.194855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.194966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.195001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 
00:37:36.641 [2024-11-19 21:27:10.195158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.195193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.195347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.195392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.195543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.195578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.195690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.195725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.195836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.195877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.196129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.196164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.196275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.196309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.196455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.196489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.196602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.196636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 21:27:10.196744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.196778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 
00:37:36.641 [2024-11-19 21:27:10.196916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 21:27:10.196950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.197135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.197183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.197337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.197376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.197552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.197604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.197821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.197859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.197980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.198015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.198145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.198181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.198322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.198357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.198467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.198502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.198664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.198699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 
00:37:36.642 [2024-11-19 21:27:10.198865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.198900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.199043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.199085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.199194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.199229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.199352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.199405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.199548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.199584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.199793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.199831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.199945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.199983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.200175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.200222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.200343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.200391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.200537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.200573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 
00:37:36.642 [2024-11-19 21:27:10.200680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.200715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.200859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.200897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.201055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.201112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.201234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.201270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.201454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.201489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.201672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.201706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.201841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.201877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.202152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.202189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.202303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.202338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.202489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.202542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 
00:37:36.642 [2024-11-19 21:27:10.202689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.202722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.202866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.202901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.203067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.203112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.203216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.203249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.203354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.203395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.203606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.203650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.203811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.203845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.203984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.642 [2024-11-19 21:27:10.204019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.642 qpair failed and we were unable to recover it. 00:37:36.642 [2024-11-19 21:27:10.204243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.204277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.204457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.204506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 
00:37:36.643 [2024-11-19 21:27:10.204632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.204684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.204850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.204885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.205052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.205093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.205199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.205233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.205373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.205437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.205638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.205706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.205851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.205889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.206063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.206123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.206237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.206272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.206439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.206485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 
00:37:36.643 [2024-11-19 21:27:10.206646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.206711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.206940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.206975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.207092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.207140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.207305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.207339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.207485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.207541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.207706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.207746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.207909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.207943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.208082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.208117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.208224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.208258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.208402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.208435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 
00:37:36.643 [2024-11-19 21:27:10.208549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.208588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.208725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.208766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.208899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.208932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.209037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.209085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.209207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.209242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.209370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.209410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.209636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.209707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.209944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.210019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.210183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.210219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.210362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.210397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 
00:37:36.643 [2024-11-19 21:27:10.210527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.210562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.210662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.210697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.210847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.210882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.211022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.211056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.211182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.211216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.643 [2024-11-19 21:27:10.211331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.643 [2024-11-19 21:27:10.211365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.643 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.211498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.211532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.211637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.211672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.211822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.211862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.212028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.212063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 
00:37:36.644 [2024-11-19 21:27:10.212193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.212227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.212329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.212371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.212504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.212539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.212697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.212730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.212860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.212896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.213000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.213034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.213212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.213262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.213521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.213556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.213670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.213704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.213920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.213981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 
00:37:36.644 [2024-11-19 21:27:10.214159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.214197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.214346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.214399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.214718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.214792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.214923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.214963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.215129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.215165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.215304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.215339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.215547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.215607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.215734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.215789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.215923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.215961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.216148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.216203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 
00:37:36.644 [2024-11-19 21:27:10.216393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.216447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.216684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.216726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.216858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.216893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.217002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.217037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.217200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.217236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.217367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.217402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 21:27:10.217545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 21:27:10.217579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.217719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.217753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.217914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.217955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.218090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.218125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 
00:37:36.645 [2024-11-19 21:27:10.218250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.218284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.218463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.218498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.218629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.218663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.218832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.218923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.219086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.219143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.219286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.219334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.219551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.219593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.219813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.219878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.219990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.220027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.220198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.220233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 
00:37:36.645 [2024-11-19 21:27:10.220383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.220420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.220619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.220695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.220877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.220915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.221054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.221101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.221270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.221304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.221441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.221475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.221593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.221655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.221837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.221875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.222042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.222083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.222198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.222231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 
00:37:36.645 [2024-11-19 21:27:10.222366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.222404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.222518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.222551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.222673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.222710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.222816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.222853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.223033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.223067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.223209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.223244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.223403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.223456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.223701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.223738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.223873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.223907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.224062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.224121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 
00:37:36.645 [2024-11-19 21:27:10.224260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.224293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.224471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.224514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.224636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.224673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.224859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 21:27:10.224902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 21:27:10.225026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.225064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.225210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.225244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.225477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.225511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.225646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.225679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.225835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.225872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.226051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.226108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 
00:37:36.646 [2024-11-19 21:27:10.226285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.226333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.226540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.226581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.226791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.226850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.227037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.227082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.227252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.227286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.227456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.227519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.227718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.227817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.227965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.228002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.228182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.228217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.228368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.228416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 
00:37:36.646 [2024-11-19 21:27:10.228542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.228579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.228782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.228841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.228980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.229015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.229205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.229253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.229412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.229452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.229556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.229591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.229760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.229794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.229934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.229972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.230101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.230153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.230330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.230383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 
00:37:36.646 [2024-11-19 21:27:10.230530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.230566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.230703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.230755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.230925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.230959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.231100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.231144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.231276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.231310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.231457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.231490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.231621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.231655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.231759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.231796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.231944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.231979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.232137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.232192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 
00:37:36.646 [2024-11-19 21:27:10.232330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.232364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.232583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 21:27:10.232654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 21:27:10.232769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.232803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.232945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.232980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.233120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.233155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.233279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.233327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.233519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.233560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.233778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.233839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.233994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.234033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.234181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.234218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 
00:37:36.647 [2024-11-19 21:27:10.234358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.234407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.234628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.234693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.234906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.234963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.235129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.235164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.235322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.235357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.235474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.235508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.235667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.235701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.235882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.235920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.236077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.236145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.236316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.236352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 
00:37:36.647 [2024-11-19 21:27:10.236503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.236555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.236734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.236792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.236991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.237047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.237201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.237237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.237349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.237383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.237532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.237567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.237718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.237757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.237916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.237950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.238086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.238125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.238273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.238307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 
00:37:36.647 [2024-11-19 21:27:10.238471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.238509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.238685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.238740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.238872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.238906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.239083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.239149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.239314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.239348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.239481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.239519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.239703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.239737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.239886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.239931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.240150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.240186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 21:27:10.240318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 21:27:10.240370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 
00:37:36.648 [2024-11-19 21:27:10.240547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.240584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.240763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.240806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.240930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.240968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.241153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.241202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.241370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.241418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.241586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.241622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.241787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.241825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.241952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.241986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.242154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.242202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.242335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.242377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 
00:37:36.648 [2024-11-19 21:27:10.242515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.242548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.242674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.242708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.242838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.242872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.243058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.243116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.243274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.243310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.243506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.243543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.243685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.243737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.243877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.243911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.244063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.244118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.244237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.244274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 
00:37:36.648 [2024-11-19 21:27:10.244422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.244462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.244586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.244638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.244776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.244810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.244950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.244984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.245139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.245175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.245291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.245325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.245453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.245487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.245642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.245680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.245832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.245870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.246132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.246179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 
00:37:36.648 [2024-11-19 21:27:10.246358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.246411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.246594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.246634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 21:27:10.246851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 21:27:10.246886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.247086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.247122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.247228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.247262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.247411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.247445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.247594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.247633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.247890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.247945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.248139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.248175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.248306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.248341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 
00:37:36.649 [2024-11-19 21:27:10.248473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.248511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.248623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.248667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.248798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.248837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.248986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.249024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.249199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.249233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.249385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.249427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.249583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.249621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.249811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.249852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.250026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.250088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.250249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.250297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 
00:37:36.649 [2024-11-19 21:27:10.250461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.250501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.250758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.250816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.251019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.251075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.251244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.251278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.251434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.251472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.251679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.251742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.251907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.251961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.252100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.252138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.252267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.252301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.252467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.252520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 
00:37:36.649 [2024-11-19 21:27:10.252773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.252807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.252951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.252986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.253128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.253176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.253363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.253403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 21:27:10.253528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 21:27:10.253566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.253770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.253830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.254007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.254044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.254183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.254217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.254331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.254366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.254538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.254592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 
00:37:36.650 [2024-11-19 21:27:10.254777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.254816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.254996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.255034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.255182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.255217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.255409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.255448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.255589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.255627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.255774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.255813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.256065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.256123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.256242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.256278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.256440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.256494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.256627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.256680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 
00:37:36.650 [2024-11-19 21:27:10.256855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.256903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.257039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.257087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.257239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.257275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.257472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.257510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.257624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.257662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.257813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.257851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.257978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.258013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.258141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.258210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.258419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.258471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.258605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.258645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 
00:37:36.650 [2024-11-19 21:27:10.258869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.258903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.259016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.259051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.259226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.259260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.259378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.259415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.259560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.259598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.259753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.259791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.259951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.259987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.260156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.260192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.260344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.260408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.260559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.260611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 
00:37:36.650 [2024-11-19 21:27:10.260861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.260919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.261061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.261107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.650 qpair failed and we were unable to recover it. 00:37:36.650 [2024-11-19 21:27:10.261293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.650 [2024-11-19 21:27:10.261329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.261485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.261531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.261683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.261724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.261868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.261902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.262036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.262083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.262236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.262284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.262472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.262514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.262630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.262669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 
00:37:36.651 [2024-11-19 21:27:10.262942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.262998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.263191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.263228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.263382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.263431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.263616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.263657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.263927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.263986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.264165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.264200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.264314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.264348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.264504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.264541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.264725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.264763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.264915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.264953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 
00:37:36.651 [2024-11-19 21:27:10.265062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.265120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.265255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.265294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.265418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.265455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.265606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.265644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.265829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.265895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.266027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.266097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.266263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.266317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.266500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.266551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.266771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.266822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.266960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.266995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 
00:37:36.651 [2024-11-19 21:27:10.267130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.267164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.267316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.267368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.267521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.267576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.267792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.267854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.268029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.268067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.651 [2024-11-19 21:27:10.268248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.651 [2024-11-19 21:27:10.268286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.651 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.268465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.268503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.268705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.268774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.268911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.268948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.269129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.269165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 
00:37:36.652 [2024-11-19 21:27:10.269336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.269400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.269616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.269676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.269870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.269927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.270054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.270099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.270245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.270280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.270469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.270512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.270645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.270748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.270899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.270937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.271127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.271162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.271337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.271385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 
00:37:36.652 [2024-11-19 21:27:10.271548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.271600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.271758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.271810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.271944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.271979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.272130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.272179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.272374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.272414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.272565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.272603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.272753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.272806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.272928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.272966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.273160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.273207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.273376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.273416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 
00:37:36.652 [2024-11-19 21:27:10.273536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.273574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.273694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.273738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.273890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.273938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.274103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.274152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.274267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.274303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.274473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.274508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.274727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.274766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.274884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.274922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.275098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.275166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.275419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.275485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 
00:37:36.652 [2024-11-19 21:27:10.275648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.275702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.275832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.275866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.276007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.652 [2024-11-19 21:27:10.276042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.652 qpair failed and we were unable to recover it. 00:37:36.652 [2024-11-19 21:27:10.276191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.276232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.276374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.276427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.276569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.276610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.276760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.276800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.276964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.277003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.277200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.277249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.277413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.277453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 
00:37:36.653 [2024-11-19 21:27:10.277562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.277599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.277754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.277793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.277898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.277935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.278050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.278111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.278255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.278303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.278484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.278538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.278644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.278679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.278807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.278842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.279006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.279042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.279171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.279210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 
00:37:36.653 [2024-11-19 21:27:10.279391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.279431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.279584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.279622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.279812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.279881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.280020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.280059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.280239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.280275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.280426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.280478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.280629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.280681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.280819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.280879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.281018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.281052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.281247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.281294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 
00:37:36.653 [2024-11-19 21:27:10.281443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.281480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.281638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.281680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.281819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.281853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.282009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.282056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.282247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.282284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.282447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.282485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.282663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.282700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.282819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.653 [2024-11-19 21:27:10.282856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.653 qpair failed and we were unable to recover it. 00:37:36.653 [2024-11-19 21:27:10.283043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.283092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.283231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.283265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 
00:37:36.654 [2024-11-19 21:27:10.283384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.283423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.283583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.283639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.283889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.283947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.284086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.284120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.284288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.284327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.284519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.284556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.284783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.284837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.284986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.285019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.285169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.285203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.285342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.285376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 
00:37:36.654 [2024-11-19 21:27:10.285539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.285577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.285683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.285720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.285886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.285939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.286108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.286144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.286323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.286377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.286586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.286650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.286778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.286818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.286972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.287006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.287181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.287229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.287430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.287470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 
00:37:36.654 [2024-11-19 21:27:10.287599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.287651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.287787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.287824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.287944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.287981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.288174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.288222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.288393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.288429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.288652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.288703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.288946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.289004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.289173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.654 [2024-11-19 21:27:10.289208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.654 qpair failed and we were unable to recover it. 00:37:36.654 [2024-11-19 21:27:10.289323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.289372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.289515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.289554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 
00:37:36.655 [2024-11-19 21:27:10.289731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.289771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.289888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.289933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.290115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.290164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.290319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.290367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.290561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.290616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.290768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.290806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.290989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.291024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.291169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.291205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.291310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.291346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.291491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.291526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 
00:37:36.655 [2024-11-19 21:27:10.291635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.291669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.291799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.291833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.291981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.292029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.292199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.292247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.292359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.292394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.292558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.292611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.292752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.292813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.292913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.292947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.293125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.293162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.293326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.293368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 
00:37:36.655 [2024-11-19 21:27:10.293555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.293594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.293742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.293780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.293930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.293968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.294100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.294136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.294322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.294361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.294496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.294534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.294676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.294714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.294841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.294877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.295032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.295091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.295269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.295315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 
00:37:36.655 [2024-11-19 21:27:10.295492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.295533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.295735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.295775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.295918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.295969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.296135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.296170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.296308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.296342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.655 [2024-11-19 21:27:10.296486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.655 [2024-11-19 21:27:10.296523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.655 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.296709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.296770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.296940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.296987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.297130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.297179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.297323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.297374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 
00:37:36.656 [2024-11-19 21:27:10.297598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.297656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.297845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.297919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.298081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.298135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.298268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.298316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.298512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.298586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.298753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.298812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.298953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.298991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.299166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.299220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.299386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.299446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.299564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.299616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 
00:37:36.656 [2024-11-19 21:27:10.299741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.299778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.299956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.299991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.300102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.300138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.300296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.300332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.300483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.300551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.300720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.300763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.300913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.300947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.301087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.301123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.301274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.301328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.301453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.301506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 
00:37:36.656 [2024-11-19 21:27:10.301635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.301669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.301827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.301861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.302022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.302056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.302204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.302239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.302388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.302436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.302563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.302600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.302820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.302882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.303033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.656 [2024-11-19 21:27:10.303083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.656 qpair failed and we were unable to recover it. 00:37:36.656 [2024-11-19 21:27:10.303261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.303308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.303527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.303602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 
00:37:36.657 [2024-11-19 21:27:10.303797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.303856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.304042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.304094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.304292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.304352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.304558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.304611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.304830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.304888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.305050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.305092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.305243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.305277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.305453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.305512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.305655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.305716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.305973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.306031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 
00:37:36.657 [2024-11-19 21:27:10.306180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.306228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.306395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.306445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.306669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.306707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.306824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.306863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.307018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.307057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.307206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.307241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.307371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.307405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.307563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.307600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.307768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.307807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.307973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.308007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 
00:37:36.657 [2024-11-19 21:27:10.308198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.308247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.308372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.308414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.308574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.308613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.308765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.308822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.308958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.308991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.309157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.309206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.309384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.309424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.309609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.309648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.309771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.309810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.309962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.310001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 
00:37:36.657 [2024-11-19 21:27:10.310187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.310222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.310327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.310379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.310531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.310583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.310775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.310840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.310983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.311021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.657 [2024-11-19 21:27:10.311180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.657 [2024-11-19 21:27:10.311215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.657 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.311332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.311380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.311490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.311525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.311670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.311724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.311842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.311882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 
00:37:36.658 [2024-11-19 21:27:10.312056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.312103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.312245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.312279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.312420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.312456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.312605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.312639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.312781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.312858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.312983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.313020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.313167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.313202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.313344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.313381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.313537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.313590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.313696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.313731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 
00:37:36.658 [2024-11-19 21:27:10.313869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.313903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.314053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.314124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.314336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.314390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.314531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.314604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.314769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.314825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.314971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.315009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.315176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.315211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.315365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.315405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.315603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.315655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.315811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.315864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 
00:37:36.658 [2024-11-19 21:27:10.316016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.316050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.316222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.316276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.316383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.316418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.316577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.316630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.316806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.316858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.317027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.317090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.317294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.317348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.658 qpair failed and we were unable to recover it. 00:37:36.658 [2024-11-19 21:27:10.317540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.658 [2024-11-19 21:27:10.317582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.317723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.317762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.317912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.317950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 
00:37:36.659 [2024-11-19 21:27:10.318081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.318116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.318243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.318291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.318458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.318514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.318683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.318721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.318838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.318878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.319034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.319081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.319209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.319245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.319421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.319462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.319622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.319695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.319865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.319903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 
00:37:36.659 [2024-11-19 21:27:10.320050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.320098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.320254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.320303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.320468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.320507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.320618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.320655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.320829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.320866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.321026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.321060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.321185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.321221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.321348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.321385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.321595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.321676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.321933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.321991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 
00:37:36.659 [2024-11-19 21:27:10.322153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.322187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.322339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.322387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.322555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.322641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.322831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.322893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.323030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.323080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.323243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.323277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.323427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.323464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.323634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.323673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.323862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.323917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.324067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.324130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 
00:37:36.659 [2024-11-19 21:27:10.324267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.324303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.324469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.324524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.324787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.324839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.324963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.659 [2024-11-19 21:27:10.325002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.659 qpair failed and we were unable to recover it. 00:37:36.659 [2024-11-19 21:27:10.325168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.325202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.325349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.325383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.325536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.325573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.325712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.325749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.325890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.325927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.326039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.326083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 
00:37:36.660 [2024-11-19 21:27:10.326217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.326251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.326396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.326451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.326602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.326658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.326809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.326847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.326975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.327010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.327160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.327214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.327385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.327434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.327559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.327595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.327736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.327775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.327935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.327969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 
00:37:36.660 [2024-11-19 21:27:10.328082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.328117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.328233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.328271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.328476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.328530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.328678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.328733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.328872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.328907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.329029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.329090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.329194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.329228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.329343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.329377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.329515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.329549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.329682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.329715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 
00:37:36.660 [2024-11-19 21:27:10.329815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.329850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.329985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.330020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.330154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.330203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.330371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.330418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.330550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.330591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.330804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.330838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.330952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.330987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.331105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.331150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.331258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.331312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.331487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.331524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 
00:37:36.660 [2024-11-19 21:27:10.331677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.660 [2024-11-19 21:27:10.331714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.660 qpair failed and we were unable to recover it. 00:37:36.660 [2024-11-19 21:27:10.331893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.331931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.332050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.332094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.332269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.332307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.332451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.332487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.332641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.332678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.332822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.332862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.333051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.333094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.333219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.333253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.333388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.333440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 
00:37:36.661 [2024-11-19 21:27:10.333586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.333636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.333815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.333867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.334007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.334041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.334217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.334271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.334437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.334478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.334678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.334741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.334917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.334987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.335144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.335179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.335299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.335339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.335522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.335559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 
00:37:36.661 [2024-11-19 21:27:10.335729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.335767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.335929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.335963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.336078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.336112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.336265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.336313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.336500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.336541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.336722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.336761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.336911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.336950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.337142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.337190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.337321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.337369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.337529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.337568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 
00:37:36.661 [2024-11-19 21:27:10.337724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.337761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.337935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.337979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.338154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.338189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.338299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.338336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.338512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.338574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.338814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.338875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.339019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.339058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.339226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.339260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.339377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.661 [2024-11-19 21:27:10.339425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.661 qpair failed and we were unable to recover it. 00:37:36.661 [2024-11-19 21:27:10.339679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.339738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 
00:37:36.662 [2024-11-19 21:27:10.339882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.339934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.340035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.340083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.340233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.340281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.340413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.340456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.340664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.340704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.341014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.341085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.341244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.341279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.341387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.341420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.341575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.341628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.341798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.341863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 
00:37:36.662 [2024-11-19 21:27:10.342004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.342040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.342227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.342265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.342420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.342462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.342715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.342754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.342979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.343036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.343202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.343239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.343370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.343427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.343580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.343631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.343812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.343882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.344020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.344054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 
00:37:36.662 [2024-11-19 21:27:10.344286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.344338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.344500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.344552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.344704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.344756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.344865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.344899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.345005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.345041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.345164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.345199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.345357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.345391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.345605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.345666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.345926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.345986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.346128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.346162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 
00:37:36.662 [2024-11-19 21:27:10.346340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.346378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.346527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.346579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.346778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.346836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.346987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.347021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.347156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.347190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.347369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.347406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.662 [2024-11-19 21:27:10.347572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.662 [2024-11-19 21:27:10.347609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.662 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.347754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.347792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.347920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.347953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.348105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.348154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 
00:37:36.663 [2024-11-19 21:27:10.348327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.348364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.348485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.348522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.348737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.348790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.348947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.348981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.349119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.349154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.349349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.349389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.349539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.349577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.349709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.349743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.349929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.349967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.350152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.350200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 
00:37:36.663 [2024-11-19 21:27:10.350370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.350411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.350577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.350616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.350763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.350801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.350934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.350967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.351123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.351171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.351295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.351329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.351454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.351492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.351599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.351637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.351816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.351858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.352037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.352083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 
00:37:36.663 [2024-11-19 21:27:10.352264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.352298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.352480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.352541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.352683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.352721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.352848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.352885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.353012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.353046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.353191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.353224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.353357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.353426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.353642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.353683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.353805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.353844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 00:37:36.663 [2024-11-19 21:27:10.353965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.663 [2024-11-19 21:27:10.354003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.663 qpair failed and we were unable to recover it. 
00:37:36.663 [2024-11-19 21:27:10.354191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.354239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.354383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.354420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.354642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.354714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.354939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.354997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.355152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.355187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.355308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.355346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.355560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.355600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.355772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.355811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.355938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.355973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.356137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.356172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 
00:37:36.664 [2024-11-19 21:27:10.356276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.356331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.356505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.356543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.356661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.356699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.356811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.356850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.357040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.357081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.357240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.357288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.357440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.357492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.357739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.357776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.357941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.357979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.358097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.358150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 
00:37:36.664 [2024-11-19 21:27:10.358289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.358325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.358462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.358514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.358689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.358726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.358898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.358936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.359085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.359138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.359266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.359314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.359535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.359632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.359819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.359880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.360014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.360082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.360244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.360278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 
00:37:36.664 [2024-11-19 21:27:10.360403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.360437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.360650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.360715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.360860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.360898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.361019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.361057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.361222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.361256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.361416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.361449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.361600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.361637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.664 [2024-11-19 21:27:10.361785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.664 [2024-11-19 21:27:10.361823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.664 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.361983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.362017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.362187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.362220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 
00:37:36.665 [2024-11-19 21:27:10.362370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.362407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.362576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.362614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.362788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.362825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.362979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.363016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.363177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.363224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.363364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.363400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.363569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.363621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.363770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.363834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.363969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.364003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.364174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.364210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 
00:37:36.665 [2024-11-19 21:27:10.364347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.364381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.364516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.364549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.364681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.364714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.364871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.364905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.365039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.365096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.365238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.365279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.365551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.365621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.365837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.365903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.366065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.366109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.366278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.366316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 
00:37:36.665 [2024-11-19 21:27:10.366430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.366469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.366706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.366767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.366902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.366949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.367087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.367158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.367281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.367329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.367552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.367594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.367846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.367906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.368034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.368074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.368222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.368264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.368411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.368463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 
00:37:36.665 [2024-11-19 21:27:10.368612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.368676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.368846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.368882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.368994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.369030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.369171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.369206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.369340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.369393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.665 [2024-11-19 21:27:10.369548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.665 [2024-11-19 21:27:10.369598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.665 qpair failed and we were unable to recover it. 00:37:36.666 [2024-11-19 21:27:10.369830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.666 [2024-11-19 21:27:10.369886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.666 qpair failed and we were unable to recover it. 00:37:36.666 [2024-11-19 21:27:10.370028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.666 [2024-11-19 21:27:10.370065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.666 qpair failed and we were unable to recover it. 00:37:36.666 [2024-11-19 21:27:10.370248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.666 [2024-11-19 21:27:10.370296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.666 qpair failed and we were unable to recover it. 00:37:36.666 [2024-11-19 21:27:10.370519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.666 [2024-11-19 21:27:10.370589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.666 qpair failed and we were unable to recover it. 
00:37:36.666 [2024-11-19 21:27:10.370764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.666 [2024-11-19 21:27:10.370824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.666 qpair failed and we were unable to recover it.
[the same three-line connect()/qpair failure repeats 59 more times between 21:27:10.370953 and 21:27:10.382401, cycling over tqpair handles 0x6150001f2f00, 0x6150001ffe80 and 0x61500021ff00, every attempt targeting addr=10.0.0.2, port=4420]
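Each of these failures is one event reported from two layers: the POSIX socket layer sees connect() return errno 111, and the NVMe/TCP transport then gives up on that qpair. On Linux, errno 111 is ECONNREFUSED: the host at 10.0.0.2 answered, but nothing was accepting connections on port 4420 (the standard NVMe/TCP port), which is what the initiator sees while the target side of the test is down or not yet listening. The following is a minimal standalone sketch, not part of the test, assuming a reachable host with no listener on the port; it produces the same errno in the same wording as the log lines above.

#include <arpa/inet.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Illustrative values taken from the log; any reachable host with
     * nothing listening on the chosen port behaves the same way. */
    const char *target_ip = "10.0.0.2";
    const uint16_t target_port = 4420;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(target_port);
    inet_pton(AF_INET, target_ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the port this prints:
         *   connect() failed, errno = 111 (Connection refused)      */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

If the host itself were unreachable, the error would instead be EHOSTUNREACH or a timeout, so a steady stream of errno 111 usually means the machine is up but the NVMe/TCP listener is not.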
00:37:36.667 [2024-11-19 21:27:10.382535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.667 [2024-11-19 21:27:10.382570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.667 qpair failed and we were unable to recover it.
[the same failure repeats 53 more times between 21:27:10.382677 and 21:27:10.393152, cycling over tqpair handles 0x6150001ffe80, 0x61500021ff00, 0x615000210000 and 0x6150001f2f00, every attempt targeting addr=10.0.0.2, port=4420]
00:37:36.669 [2024-11-19 21:27:10.393280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set
[5 further connect()/qpair failures follow between 21:27:10.393480 and 21:27:10.394393 for tqpair handles 0x6150001ffe80 and 0x615000210000]
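The density of the output, hundreds of near-identical messages within a few tens of milliseconds, is what repeated connection attempts against a refused port look like: each attempt fails immediately, the qpair is torn down, and the next attempt starts. The sketch below is not SPDK's reconnect logic; it is only a generic stand-in showing why a persistent refusal produces this kind of run, with an illustrative endpoint, back-off and attempt count.

#include <arpa/inet.h>
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Generic illustration only: repeatedly try to connect to a TCP endpoint,
 * logging each refusal, the way any initiator retrying a down target would. */
static bool try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return false;

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    bool connected = (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0);
    if (!connected)
        fprintf(stderr, "connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return connected;
}

int main(void)
{
    /* Endpoint mirrors the values in the log; attempt count is arbitrary. */
    for (int attempt = 0; attempt < 10; attempt++) {
        if (try_connect("10.0.0.2", 4420))
            return 0;           /* listener came back, stop retrying      */
        usleep(100 * 1000);     /* brief back-off before the next attempt */
    }
    return 1;                   /* still refused after all attempts       */
}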
00:37:36.669 [2024-11-19 21:27:10.394511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.669 [2024-11-19 21:27:10.394548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.669 qpair failed and we were unable to recover it.
[the same three-line connect()/qpair failure repeats 89 more times between 21:27:10.394719 and 21:27:10.413234, cycling over tqpair handles 0x615000210000, 0x6150001f2f00, 0x6150001ffe80 and 0x61500021ff00, every attempt targeting addr=10.0.0.2, port=4420]
00:37:36.671 [2024-11-19 21:27:10.413370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.671 [2024-11-19 21:27:10.413425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.671 qpair failed and we were unable to recover it. 00:37:36.671 [2024-11-19 21:27:10.413559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.671 [2024-11-19 21:27:10.413611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.671 qpair failed and we were unable to recover it. 00:37:36.671 [2024-11-19 21:27:10.413759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.671 [2024-11-19 21:27:10.413796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.671 qpair failed and we were unable to recover it. 00:37:36.671 [2024-11-19 21:27:10.413942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.671 [2024-11-19 21:27:10.413979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.671 qpair failed and we were unable to recover it. 00:37:36.671 [2024-11-19 21:27:10.414147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.671 [2024-11-19 21:27:10.414180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.671 qpair failed and we were unable to recover it. 00:37:36.671 [2024-11-19 21:27:10.414312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.671 [2024-11-19 21:27:10.414361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.671 qpair failed and we were unable to recover it. 00:37:36.671 [2024-11-19 21:27:10.414478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.671 [2024-11-19 21:27:10.414516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.671 qpair failed and we were unable to recover it. 00:37:36.671 [2024-11-19 21:27:10.414635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.671 [2024-11-19 21:27:10.414685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.671 qpair failed and we were unable to recover it. 00:37:36.672 [2024-11-19 21:27:10.414848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.672 [2024-11-19 21:27:10.414900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.672 qpair failed and we were unable to recover it. 00:37:36.672 [2024-11-19 21:27:10.415083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.672 [2024-11-19 21:27:10.415151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.672 qpair failed and we were unable to recover it. 
00:37:36.672 [2024-11-19 21:27:10.415289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.672 [2024-11-19 21:27:10.415337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.672 qpair failed and we were unable to recover it. 00:37:36.672 [2024-11-19 21:27:10.415571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.958 [2024-11-19 21:27:10.415625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.958 qpair failed and we were unable to recover it. 00:37:36.958 [2024-11-19 21:27:10.415820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.958 [2024-11-19 21:27:10.415879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.958 qpair failed and we were unable to recover it. 00:37:36.958 [2024-11-19 21:27:10.416009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.958 [2024-11-19 21:27:10.416046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.958 qpair failed and we were unable to recover it. 00:37:36.958 [2024-11-19 21:27:10.416203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.958 [2024-11-19 21:27:10.416237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.958 qpair failed and we were unable to recover it. 00:37:36.958 [2024-11-19 21:27:10.416385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.958 [2024-11-19 21:27:10.416422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.958 qpair failed and we were unable to recover it. 00:37:36.958 [2024-11-19 21:27:10.416623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.958 [2024-11-19 21:27:10.416682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.958 qpair failed and we were unable to recover it. 00:37:36.958 [2024-11-19 21:27:10.416813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.958 [2024-11-19 21:27:10.416852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.958 qpair failed and we were unable to recover it. 00:37:36.958 [2024-11-19 21:27:10.416978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.958 [2024-11-19 21:27:10.417037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.958 qpair failed and we were unable to recover it. 00:37:36.958 [2024-11-19 21:27:10.417194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.958 [2024-11-19 21:27:10.417233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.958 qpair failed and we were unable to recover it. 
00:37:36.958 [2024-11-19 21:27:10.417344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.958 [2024-11-19 21:27:10.417399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.958 qpair failed and we were unable to recover it. 00:37:36.958 [2024-11-19 21:27:10.417554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.958 [2024-11-19 21:27:10.417592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.958 qpair failed and we were unable to recover it. 00:37:36.958 [2024-11-19 21:27:10.417712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.958 [2024-11-19 21:27:10.417751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.958 qpair failed and we were unable to recover it. 00:37:36.958 [2024-11-19 21:27:10.417917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.958 [2024-11-19 21:27:10.417955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.958 qpair failed and we were unable to recover it. 00:37:36.958 [2024-11-19 21:27:10.418120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.958 [2024-11-19 21:27:10.418158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.958 qpair failed and we were unable to recover it. 00:37:36.958 [2024-11-19 21:27:10.418322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.958 [2024-11-19 21:27:10.418365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.958 qpair failed and we were unable to recover it. 00:37:36.958 [2024-11-19 21:27:10.418608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.958 [2024-11-19 21:27:10.418668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.958 qpair failed and we were unable to recover it. 00:37:36.958 [2024-11-19 21:27:10.418808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.958 [2024-11-19 21:27:10.418879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.958 qpair failed and we were unable to recover it. 00:37:36.958 [2024-11-19 21:27:10.419085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.958 [2024-11-19 21:27:10.419135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.958 qpair failed and we were unable to recover it. 00:37:36.958 [2024-11-19 21:27:10.419248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.958 [2024-11-19 21:27:10.419284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 
00:37:36.959 [2024-11-19 21:27:10.419433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.419481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.419700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.419760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.419942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.420006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.420116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.420151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.420305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.420359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.420518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.420571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.420724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.420777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.420963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.421001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.421141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.421189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.421302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.421356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 
00:37:36.959 [2024-11-19 21:27:10.421651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.421714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.421922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.421960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.422157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.422191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.422327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.422378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.422528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.422566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.422702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.422752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.422944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.422992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.423141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.423198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.423355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.423409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.423590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.423643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 
00:37:36.959 [2024-11-19 21:27:10.423749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.423783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.423927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.423975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.424123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.424161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.424287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.424335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.424493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.424549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.424698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.424736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.424932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.424965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.425101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.425149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.425340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.425398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 00:37:36.959 [2024-11-19 21:27:10.425577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.425640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.959 qpair failed and we were unable to recover it. 
00:37:36.959 [2024-11-19 21:27:10.425750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.959 [2024-11-19 21:27:10.425785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.425950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.425985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.426170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.426225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.426363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.426402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.426589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.426650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.426838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.426900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.427040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.427085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.427262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.427295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.427537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.427596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.427709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.427746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 
00:37:36.960 [2024-11-19 21:27:10.427927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.427965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.428094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.428131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.428288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.428342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.428508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.428551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.428750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.428811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.428976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.429012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.429155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.429191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.429370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.429409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.429627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.429665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.429826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.429879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 
00:37:36.960 [2024-11-19 21:27:10.430026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.430080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.430238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.430273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.430390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.430443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.430640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.430711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.430863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.430902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.431041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.431084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.431225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.431259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.431396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.431430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.431583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.431621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.431793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.431830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 
00:37:36.960 [2024-11-19 21:27:10.431984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.432018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.432169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 21:27:10.432205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 21:27:10.432376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.432430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.432701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.432762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.433020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.433061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.433224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.433259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.433411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.433459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.433613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.433667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.433809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.433882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.434019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.434055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 
00:37:36.961 [2024-11-19 21:27:10.434229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.434277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.434481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.434521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.434737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.434799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.434964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.434998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.435108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.435143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.435280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.435314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.435412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.435446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.435639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.435701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.435907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.435945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.436065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.436124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 
00:37:36.961 [2024-11-19 21:27:10.436237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.436271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.436404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.436438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.436555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.436598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.436784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.436821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.436952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.436985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.437126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.437160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.437331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.437384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.437614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.437679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.437841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.437896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.438059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.438100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 
00:37:36.961 [2024-11-19 21:27:10.438209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.438244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 21:27:10.438400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 21:27:10.438452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.438603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.438642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.438890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.438949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.439062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.439103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.439252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.439288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.439428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.439463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.439565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.439599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.439756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.439827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.439996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.440034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 
00:37:36.962 [2024-11-19 21:27:10.440172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.440207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.440368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.440405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.440517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.440554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.440668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.440706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.440833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.440870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.441015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.441052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.441215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.441264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.441403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.441457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.441609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.441663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.441796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.441849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 
00:37:36.962 [2024-11-19 21:27:10.441984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.442018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.442141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.442176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.442272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.442306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.442454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.442491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.442641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.442677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.442801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.442840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.442951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.442989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.443152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.443185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.443288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.443324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.443451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.443490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 
00:37:36.962 [2024-11-19 21:27:10.443693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 21:27:10.443745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 21:27:10.443848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.443882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.444010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.444063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.444252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.444293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.444441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.444480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.444624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.444701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.444871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.444932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.445054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.445103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.445231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.445267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.445431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.445490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 
00:37:36.963 [2024-11-19 21:27:10.445642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.445695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.445807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.445842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.445999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.446047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.446190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.446237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.446361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.446418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.446592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.446630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.446821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.446874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.447052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.447118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.447259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.447294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.447409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.447461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 
00:37:36.963 [2024-11-19 21:27:10.447588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.447640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.447768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.447806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.447970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.448005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.448150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.448198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.448316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.448351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.448477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.448515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.448639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.448677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.448906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.448979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.449103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.449153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 00:37:36.963 [2024-11-19 21:27:10.449269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.963 [2024-11-19 21:27:10.449305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.963 qpair failed and we were unable to recover it. 
00:37:36.963 [2024-11-19 21:27:10.449492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.449531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.449736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.449788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.449905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.449944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.450123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.450160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.450311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.450379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.450601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.450648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.450835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.450873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.450988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.451039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.451164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.451202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.451334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.451375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 
00:37:36.964 [2024-11-19 21:27:10.451555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.451593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.451747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.451785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.451926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.451971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.452146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.452195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.452321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.452374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.452552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.452623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.452847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.452907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.453053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.453129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.453234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.453268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.453411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.453445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 
00:37:36.964 [2024-11-19 21:27:10.453609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.453647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.453779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.453820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.453977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.454016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.454163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.454198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.454349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.454397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.454636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.454698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.454880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.454945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.455144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.964 [2024-11-19 21:27:10.455178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.964 qpair failed and we were unable to recover it. 00:37:36.964 [2024-11-19 21:27:10.455281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.455315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.455486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.455523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 
00:37:36.965 [2024-11-19 21:27:10.455640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.455677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.455880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.455919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.456075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.456129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.456260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.456294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.456424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.456461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.456624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.456661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.456793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.456846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.456996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.457044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.457186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.457223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.457384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.457423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 
00:37:36.965 [2024-11-19 21:27:10.457555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.457606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.457727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.457764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.457891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.457929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.458076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.458156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.458298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.458345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.458502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.458557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.458687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.458741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.458873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.458908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.459056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.459111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.459259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.459342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 
00:37:36.965 [2024-11-19 21:27:10.459497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.459531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.459678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.459712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.459829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.459869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.459980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.460015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.460172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.460211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.460376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.460411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.460538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.460572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.460684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.460718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.460829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.460863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.460969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.461003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 
00:37:36.965 [2024-11-19 21:27:10.461121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.461156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 21:27:10.461304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 21:27:10.461350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.461495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.461533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.461647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.461682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.461813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.461851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.461998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.462046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.462260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.462303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.462423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.462462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.462578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.462616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.462800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.462861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 
00:37:36.966 [2024-11-19 21:27:10.462992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.463029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.463171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.463207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.463334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.463387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.463515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.463585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.463783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.463849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.463988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.464024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.464151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.464186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.464322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.464356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.464517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.464550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.464766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.464842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 
00:37:36.966 [2024-11-19 21:27:10.464980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.465018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.465158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.465196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.465348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.465382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.465479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.465513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.465622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.465656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.465778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.465813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.465925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.465972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.466109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.466158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.466348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.466389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.466517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.466574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 
00:37:36.966 [2024-11-19 21:27:10.466717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.466756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.466886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.466925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.467100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.467154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.467314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.467371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 21:27:10.467512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 21:27:10.467568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.467693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.467728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.467846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.467882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.468017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.468051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.468172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.468206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.468308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.468343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 
00:37:36.967 [2024-11-19 21:27:10.468450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.468484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.468598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.468633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.468738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.468780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.468927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.468964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.469103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.469138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.469251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.469286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.469444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.469479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.469635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.469669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.469802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.469836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.469970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.470005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 
00:37:36.967 [2024-11-19 21:27:10.470131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.470166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.470275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.470308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.470439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.470477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.470657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.470726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.470859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.470896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.471055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.471099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.471235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.471288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.471454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.471505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.471658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.471712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.471850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.471885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 
00:37:36.967 [2024-11-19 21:27:10.472004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.472038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.472194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.472230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.472367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.472400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.472535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.472568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.472693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 21:27:10.472732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 21:27:10.472853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.968 [2024-11-19 21:27:10.472890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.968 qpair failed and we were unable to recover it. 00:37:36.968 [2024-11-19 21:27:10.473035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.968 [2024-11-19 21:27:10.473078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.968 qpair failed and we were unable to recover it. 00:37:36.968 [2024-11-19 21:27:10.473251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.968 [2024-11-19 21:27:10.473305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.968 qpair failed and we were unable to recover it. 00:37:36.968 [2024-11-19 21:27:10.473411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.968 [2024-11-19 21:27:10.473445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.968 qpair failed and we were unable to recover it. 00:37:36.968 [2024-11-19 21:27:10.473566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.968 [2024-11-19 21:27:10.473603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.968 qpair failed and we were unable to recover it. 
00:37:36.974 [2024-11-19 21:27:10.511699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.974 [2024-11-19 21:27:10.511736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.974 qpair failed and we were unable to recover it. 00:37:36.974 [2024-11-19 21:27:10.511863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.974 [2024-11-19 21:27:10.511899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.974 qpair failed and we were unable to recover it. 00:37:36.974 [2024-11-19 21:27:10.512037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.974 [2024-11-19 21:27:10.512081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.974 qpair failed and we were unable to recover it. 00:37:36.974 [2024-11-19 21:27:10.512222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.512260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.512447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.512500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.512620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.512658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.512805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.512840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.512954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.512989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.513100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.513135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.513276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.513330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 
00:37:36.975 [2024-11-19 21:27:10.513498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.513554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.513682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.513734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.513879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.513918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.514081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.514116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.514258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.514295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.514480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.514519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.514691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.514729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.514871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.514924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.515086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.515138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.515304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.515353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 
00:37:36.975 [2024-11-19 21:27:10.515538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.515610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.515754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.515792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.515919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.515957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.516099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.516134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.516284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.516323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.516471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.516511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.516730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.516768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.516892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.516957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.975 qpair failed and we were unable to recover it. 00:37:36.975 [2024-11-19 21:27:10.517098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.975 [2024-11-19 21:27:10.517135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.517279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.517319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 
00:37:36.976 [2024-11-19 21:27:10.517496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.517535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.517651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.517689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.517888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.517926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.518045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.518093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.518245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.518293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.518443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.518479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.518655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.518717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.518839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.518898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.519031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.519065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.519197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.519232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 
00:37:36.976 [2024-11-19 21:27:10.519343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.519397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.519520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.519571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.519725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.519763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.519926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.519964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.520105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.520140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.520275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.520309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.520440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.520476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.520650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.520689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.520851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.520903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.521043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.521098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 
00:37:36.976 [2024-11-19 21:27:10.521281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.521315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.521424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.521477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.521589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.521627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.521755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.521805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.521978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.522016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.522203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.522252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.522415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.522463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.522634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.522694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.522917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.522971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.523084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.523119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 
00:37:36.976 [2024-11-19 21:27:10.523305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 21:27:10.523359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 21:27:10.523510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.523548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.523697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.523750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.523898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.523933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.524079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.524115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.524213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.524247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.524441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.524506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.524656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.524721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.524901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.524974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.525157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.525200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 
00:37:36.977 [2024-11-19 21:27:10.525346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.525413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.525575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.525630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.525787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.525847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.526024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.526061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.526204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.526238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.526421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.526480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.526625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.526663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.526846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.526885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.527080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.527136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.527306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.527342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 
00:37:36.977 [2024-11-19 21:27:10.527522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.527575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.527729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.527788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.527900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.527936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.528084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.528119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.528286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.528325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.528440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.528478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.528666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.528729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.528871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.528927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.529047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.529106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.529242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.529275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 
00:37:36.977 [2024-11-19 21:27:10.529430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 21:27:10.529468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 21:27:10.529573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.529611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.529747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.529782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.529960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.529996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.530133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.530181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.530321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.530370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.530532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.530570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.530697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.530749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.530872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.530909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.531035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.531078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 
00:37:36.978 [2024-11-19 21:27:10.531241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.531274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.531407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.531444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.531580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.531617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.531817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.531855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.532003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.532041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.532198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.532231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.532337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.532389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.532539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.532576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.532754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.532791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.532981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.533035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 
00:37:36.978 [2024-11-19 21:27:10.533206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.533254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.533393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.533434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.533574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.533643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.533816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.533854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.533957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.533995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.534140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.534176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.534298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.534351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.534531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.534585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.534758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.534812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.534963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.535001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 
00:37:36.978 [2024-11-19 21:27:10.535152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.535186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.535317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.535372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 21:27:10.535534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 21:27:10.535571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.535701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.535754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.535945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.535984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.536153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.536208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.536409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.536475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.536643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.536695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.536826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.536882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.536995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.537030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 
00:37:36.979 [2024-11-19 21:27:10.537220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.537273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.537378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.537413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.537533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.537589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.537758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.537824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.537952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.537990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.538181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.538219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.538377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.538431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.538562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.538602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.538794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.538858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.538986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.539026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 
00:37:36.979 [2024-11-19 21:27:10.539168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.539221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.539330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.539365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.539529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.539567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.539745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.539783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.539928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.539966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.540105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.540139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.540267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.540315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.540554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.540616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.540797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.540868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.541013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.541057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 
00:37:36.979 [2024-11-19 21:27:10.541207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.541241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.541403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.541465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.541615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.541671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.979 [2024-11-19 21:27:10.541793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.979 [2024-11-19 21:27:10.541831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.979 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.541979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.542026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.542193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.542231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.542374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.542428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.542529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.542563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.542712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.542765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.542904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.542939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 
00:37:36.980 [2024-11-19 21:27:10.543087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.543122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.543236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.543270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.543396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.543430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.543567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.543601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.543751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.543806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.543908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.543943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.544125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.544180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.544303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.544342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.544513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.544566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.544691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.544742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 
00:37:36.980 [2024-11-19 21:27:10.544855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.544891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.545020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.545074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.545200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.545238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.545399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.545438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.545555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.545594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.545751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.545789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.545954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.545990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.546133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.546180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.546371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.546432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 00:37:36.980 [2024-11-19 21:27:10.546588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.546650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.980 qpair failed and we were unable to recover it. 
00:37:36.980 [2024-11-19 21:27:10.546835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.980 [2024-11-19 21:27:10.546897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.547054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.547117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.547252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.547289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.547422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.547461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.547600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.547639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.547765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.547805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.547958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.547993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.548132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.548167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.548295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.548329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.548485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.548547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 
00:37:36.981 [2024-11-19 21:27:10.548722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.548760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.548915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.548954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.549107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.549169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.549310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.549378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.549503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.549542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.549732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.549790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.549911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.549949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.550103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.550158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.550298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.550333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.550504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.550543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 
00:37:36.981 [2024-11-19 21:27:10.550687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.550725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.550852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.550891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.551027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.551062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.551233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.551281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.551413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.551453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.551632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.551671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.551809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.551847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.551989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.552035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.552213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.552247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.552419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.552483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 
00:37:36.981 [2024-11-19 21:27:10.552610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.552671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 21:27:10.552846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 21:27:10.552883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.553002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.553043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.553201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.553249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.553401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.553449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.553606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.553662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.553780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.553834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.553978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.554016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.554158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.554194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.554331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.554364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 
00:37:36.982 [2024-11-19 21:27:10.554519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.554557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.554681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.554719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.554846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.554906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.555054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.555119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.555246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.555280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.555446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.555483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.555597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.555634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.555756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.555794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.555945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.555992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.556121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.556175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 
00:37:36.982 [2024-11-19 21:27:10.556297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.556333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.556473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.556511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.556715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.556753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.556867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.556905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.557024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.557063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.557215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.557268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.557417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.557455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.557567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.557606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.557752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.557792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 21:27:10.557920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.557958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 
00:37:36.982 [2024-11-19 21:27:10.558126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 21:27:10.558181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.558328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.558396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.558528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.558576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.558735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.558795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.558911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.558948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.559119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.559154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.559267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.559303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.559468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.559535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.559752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.559810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.559929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.559968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 
00:37:36.983 [2024-11-19 21:27:10.560151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.560187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.560301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.560336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.560472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.560526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.560647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.560685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.560865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.560903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.561042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.561104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.561277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.561326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.561510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.561550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.561695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.561757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.561901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.561939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 
00:37:36.983 [2024-11-19 21:27:10.562090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.562144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.562308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.562344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.562454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.562500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.562610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.562643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.562775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.562812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.562959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.562997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.563140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.563174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.563307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.563348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.563519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.563564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.563728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.563772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 
00:37:36.983 [2024-11-19 21:27:10.563905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.563959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.564128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.564163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.564274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 21:27:10.564307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 21:27:10.564462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.564515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.564667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.564715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.564860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.564897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.565060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.565114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.565221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.565255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.565384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.565421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.565595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.565632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 
00:37:36.984 [2024-11-19 21:27:10.565782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.565820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.565966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.566004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.566194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.566243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.566403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.566456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.566606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.566645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.566788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.566826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.567017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.567055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.567204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.567253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.567464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.567505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.567675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.567714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 
00:37:36.984 [2024-11-19 21:27:10.567829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.567868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.567980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.568018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.568153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.568187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.568335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.568396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.568526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.568580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.568754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.568792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.568938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.568973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.569113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.569148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.569256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.569311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.569459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.569498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 
00:37:36.984 [2024-11-19 21:27:10.569613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.569651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.569833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.569872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.570008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.570043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.570207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.570241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.570373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.570421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 21:27:10.570585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 21:27:10.570639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.570764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.570819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.570927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.570963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.571141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.571200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.571383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.571434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 
00:37:36.985 [2024-11-19 21:27:10.571561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.571600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.571760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.571797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.571933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.571967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.572081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.572117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.572266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.572303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.572459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.572496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.572606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.572642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.572753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.572789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.572904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.572942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.573073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.573136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 
00:37:36.985 [2024-11-19 21:27:10.573254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.573289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.573467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.573518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.573697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.573754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.573888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.573926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.574055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.574102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.574254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.574290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.574443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.574481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.574601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.574654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.574787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.574824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.574963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.574998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 
00:37:36.985 [2024-11-19 21:27:10.575138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.575179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.575329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.575395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.575567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.575606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.575726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.575763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.575899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.575935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.576099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 21:27:10.576152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 21:27:10.576262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.576301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.576428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.576464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.576569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.576616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.576743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.576778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 
00:37:36.986 [2024-11-19 21:27:10.576886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.576922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.577046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.577123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.577305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.577353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.577494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.577548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.577697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.577749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.577864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.577899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.578040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.578086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.578205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.578240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.578362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.578401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.578506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.578541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 
00:37:36.986 [2024-11-19 21:27:10.578680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.578714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.578852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.578886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.579003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.579052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.579230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.579267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.579392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.579428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.579581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.579633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.579746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.579780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.579895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.579929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.580063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.580104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.580215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.580250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 
00:37:36.986 [2024-11-19 21:27:10.580365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.580399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 21:27:10.580586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 21:27:10.580635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.580776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.580811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.580958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.580994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.581106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.581146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.581284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.581318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.581429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.581464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.581574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.581610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.581764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.581816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.581958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.581993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 
00:37:36.987 [2024-11-19 21:27:10.582108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.582142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.582257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.582291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.582466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.582514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.582654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.582690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.582829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.582863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.582992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.583026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.583198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.583241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.583351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.583387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.583493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.583528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.583638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.583674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 
00:37:36.987 [2024-11-19 21:27:10.583844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.583879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.584014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.584050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.584189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.584237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.584434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.584483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.584624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.584661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.584898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.584934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.585053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.585098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.585238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.585273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.585442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.585490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.585635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.585672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 
00:37:36.987 [2024-11-19 21:27:10.585788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.585825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.585934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.585967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.586106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.987 [2024-11-19 21:27:10.586145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.987 qpair failed and we were unable to recover it. 00:37:36.987 [2024-11-19 21:27:10.586281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.586316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.586456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.586489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.586594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.586629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.586738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.586772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.586905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.586939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.587060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.587122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.587245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.587281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 
00:37:36.988 [2024-11-19 21:27:10.587420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.587467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.587612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.587647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.587758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.587792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.587933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.587967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.588082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.588118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.588234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.588268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.588378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.588411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.588546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.588581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.588712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.588745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.588848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.588882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 
00:37:36.988 [2024-11-19 21:27:10.588987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.589021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.589166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.589214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.589329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.589365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.589482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.589517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.589656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.589691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.589846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.589880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.589987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.590025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.590153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.590189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.590335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.590385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.590501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.590537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 
00:37:36.988 [2024-11-19 21:27:10.590653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.590687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.590852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.590886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.590987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.591021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.591147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.591182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.591319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.591353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.591466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.591500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.988 [2024-11-19 21:27:10.591620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.988 [2024-11-19 21:27:10.591653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.988 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.591789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.591825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.591941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.591975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.592109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.592144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 
00:37:36.989 [2024-11-19 21:27:10.592283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.592318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.592434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.592468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.592584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.592618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.592749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.592785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.592889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.592923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.593062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.593104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.593220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.593254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.593356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.593390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.593503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.593537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.593663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.593698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 
00:37:36.989 [2024-11-19 21:27:10.593817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.593866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.594036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.594080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.594202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.594237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.594358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.594393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.594498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.594532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.594643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.594677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.594819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.594855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.595015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.595078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.595243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.595278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.595435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.595469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 
00:37:36.989 [2024-11-19 21:27:10.595602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.595636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.595746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.595780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.595918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.595953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.596135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.596170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.596283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.596317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.596488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.596523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 21:27:10.596660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 21:27:10.596700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.596812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.596846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.596981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.597028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.597175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.597223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 
00:37:36.990 [2024-11-19 21:27:10.597343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.597379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.597489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.597524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.597654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.597689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.597798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.597844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.597971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.598006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.598128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.598166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.598323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.598358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.598500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.598534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.598687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.598722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.598823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.598857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 
00:37:36.990 [2024-11-19 21:27:10.598998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.599033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.599181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.599218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.599405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.599457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.599606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.599660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.599814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.599866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.599998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.600037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.600179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.600227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.600340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.600375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.600529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.600577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.600701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.600737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 
00:37:36.990 [2024-11-19 21:27:10.600870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.600903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.601034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.601079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.601218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.601252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.601381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.601419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.601560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.601595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.601736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.601771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 21:27:10.601907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 21:27:10.601941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.602076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.602132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.602287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.602327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.602448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.602484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 
00:37:36.991 [2024-11-19 21:27:10.602686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.602723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.602831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.602878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.602986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.603020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.603172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.603210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.603330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.603378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.603542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.603590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.603711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.603754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.603885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.603920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.604029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.604064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.604207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.604240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 
00:37:36.991 [2024-11-19 21:27:10.604368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.604408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.604551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.604585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.604714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.604749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.604891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.604938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.605083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.605126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.605258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.605292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.605432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.605466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.605573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.605606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.605713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.605748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.605852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.605888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 
00:37:36.991 [2024-11-19 21:27:10.606021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.606084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.606227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.606264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.606367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.606402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.606535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.606571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 21:27:10.606682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 21:27:10.606718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.606856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.606892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.607030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.607067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.607237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.607286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.607446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.607482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.607588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.607624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 
00:37:36.992 [2024-11-19 21:27:10.607766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.607802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.607942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.607978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.608086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.608120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.608235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.608271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.608376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.608411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.608542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.608578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.608710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.608756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.608870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.608906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.609022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.609060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.609203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.609252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 
00:37:36.992 [2024-11-19 21:27:10.609447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.609497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.609615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.609652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.609752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.609787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.609894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.609929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.610044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.610092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.610210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.610245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.610354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.610394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.610509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.610544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.610688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.610723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.610873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.610914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 
00:37:36.992 [2024-11-19 21:27:10.611058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.611105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.611225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.611261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.611370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.611406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.611517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.611552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.611701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.611736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.611848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.611884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 21:27:10.612049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 21:27:10.612108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.612242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.612291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.612433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.612469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.612601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.612636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 
00:37:36.993 [2024-11-19 21:27:10.612755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.612790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.612890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.612926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.613101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.613152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.613269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.613306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.613448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.613483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.613591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.613626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.613791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.613826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.613930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.613965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.614084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.614120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.614258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.614293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 
00:37:36.993 [2024-11-19 21:27:10.614405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.614440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.614577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.614611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.614747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.614781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.614899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.614939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.615088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.615125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.615251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.615301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.615414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.615450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.615571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.615606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.615718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.615754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.615889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.615924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 
00:37:36.993 [2024-11-19 21:27:10.616086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.616125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.616243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.616279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.616411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.616446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.616580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.616615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.616724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.616759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.616867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.616907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.617056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.617120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.617240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.617278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.617382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.617417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 21:27:10.617536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 21:27:10.617572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 
00:37:36.993 [2024-11-19 21:27:10.617681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.617717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.617822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.617858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.617966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.618001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.618128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.618162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.618282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.618318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.618428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.618463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.618599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.618634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.618752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.618789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.618926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.618962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.619096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.619132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 
00:37:36.994 [2024-11-19 21:27:10.619267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.619303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.619444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.619479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.619608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.619643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.619755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.619792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.619940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.619975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.620139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.620174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.620291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.620326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.620464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.620500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.620615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.620650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.620766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.620803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 
00:37:36.994 [2024-11-19 21:27:10.620938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.620979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.621091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.621127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.621261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.621297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.621422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.621472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.621585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.621622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.621758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.621794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.621902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.621937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.622048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.622105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.622261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.622299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.622437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.622473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 
00:37:36.994 [2024-11-19 21:27:10.622613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.622649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.622803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.622838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 21:27:10.622946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 21:27:10.622982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.623121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.623157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.623262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.623297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.623400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.623435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.623597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.623637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.623738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.623773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.623914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.623951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.624062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.624106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 
00:37:36.995 [2024-11-19 21:27:10.624275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.624311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.624421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.624455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.624591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.624627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.624766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.624802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.624921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.624957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.625084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.625122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.625254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.625289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.625423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.625470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.625610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.625646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.625780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.625830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 
00:37:36.995 [2024-11-19 21:27:10.625956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.625992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.626184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.626234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.626350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.626387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.626492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.626527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.626639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.626675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.626818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.626854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.626981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.627031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.627157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.627193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.627327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.627362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.627503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.627539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 
00:37:36.995 [2024-11-19 21:27:10.627697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.627733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.627840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.627877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.628016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.628051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.628228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.628269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.628372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.628419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.628542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.995 [2024-11-19 21:27:10.628590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.995 qpair failed and we were unable to recover it. 00:37:36.995 [2024-11-19 21:27:10.628708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.628751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.628875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.628911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.629052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.629099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.629228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.629265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 
00:37:36.996 [2024-11-19 21:27:10.629419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.629458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.629606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.629642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.629755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.629790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.629903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.629939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.630096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.630132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.630272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.630307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.630445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.630486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.630599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.630636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.630755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.630793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.630951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.631001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 
00:37:36.996 [2024-11-19 21:27:10.631161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.631211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.631324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.631361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.631503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.631539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.631676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.631711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.631818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.631854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.631983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.632033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.632161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.632200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.632338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.632374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.632511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.632546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.632659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.632694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 
00:37:36.996 [2024-11-19 21:27:10.632841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.632877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.632989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.633024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.633170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.633205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.633342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.633377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.633491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.633526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.633632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.633668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.633839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.633875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.634004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.996 [2024-11-19 21:27:10.634040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.996 qpair failed and we were unable to recover it. 00:37:36.996 [2024-11-19 21:27:10.634178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.634228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.634350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.634386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 
00:37:36.997 [2024-11-19 21:27:10.634502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.634537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.634643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.634678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.634788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.634823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.634979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.635028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.635151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.635188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.635305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.635340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.635506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.635541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.635668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.635703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.635836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.635871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.636015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.636052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 
00:37:36.997 [2024-11-19 21:27:10.636228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.636278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.636422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.636461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.636563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.636599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.636745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.636781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.636915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.636950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.637086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.637135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.637281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.637326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.637437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.637473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.637580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.637615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.637721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.637756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 
00:37:36.997 [2024-11-19 21:27:10.637872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.637908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.638045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.638088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.638212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.638261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.638418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.638472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.638623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.638658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.638799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.638834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.997 qpair failed and we were unable to recover it. 00:37:36.997 [2024-11-19 21:27:10.638977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.997 [2024-11-19 21:27:10.639013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.639150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.639185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.639292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.639328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.639471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.639506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 
00:37:36.998 [2024-11-19 21:27:10.639617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.639652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.639752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.639789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.639946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.639996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.640147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.640185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.640365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.640401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.640516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.640552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.640715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.640749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.640856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.640892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.640992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.641025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.641177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.641212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 
00:37:36.998 [2024-11-19 21:27:10.641321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.641355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.641463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.641499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.641635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.641676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.641788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.641824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.641942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.641992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.642140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.642175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.642291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.642326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.642435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.642470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.642574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.642609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.642736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.642771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 
00:37:36.998 [2024-11-19 21:27:10.642886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.642920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.643057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.643100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.643209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.643244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.643377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.643412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.643520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.643555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.643661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.643695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.643821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.643875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.644013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.644062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.644244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.644282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.998 qpair failed and we were unable to recover it. 00:37:36.998 [2024-11-19 21:27:10.644419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.998 [2024-11-19 21:27:10.644455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 
00:37:36.999 [2024-11-19 21:27:10.644563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.644601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.644729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.644778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.644947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.644983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.645121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.645156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.645268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.645302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.645437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.645472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.645588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.645628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.645746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.645782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.645894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.645934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.646049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.646103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 
00:37:36.999 [2024-11-19 21:27:10.646227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.646263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.646362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.646396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.646545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.646581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.646697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.646735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.646862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.646899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.647019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.647055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.647169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.647204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.647322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.647356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.647495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.647530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.647664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.647699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 
00:37:36.999 [2024-11-19 21:27:10.647840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.647877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.647976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.648013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.648138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.648175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.648319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.648355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.648487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.648522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.648628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.648662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.648795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.648829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.648948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.648985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.649137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.649187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.649322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.649360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 
00:37:36.999 [2024-11-19 21:27:10.649481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.649517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.649653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.649689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.649814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.649850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.649989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.650025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.650181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.650217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.650321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.650365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.999 qpair failed and we were unable to recover it. 00:37:36.999 [2024-11-19 21:27:10.650496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.999 [2024-11-19 21:27:10.650540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.650696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.650732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.650836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.650872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.651006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.651041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 
00:37:37.000 [2024-11-19 21:27:10.651181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.651216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.651350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.651386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.651518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.651553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.651688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.651723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.651865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.651900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.652073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.652121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.652235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.652269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.652414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.652449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.652559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.652595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.652711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.652746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 
00:37:37.000 [2024-11-19 21:27:10.652884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.652919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.653027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.653063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.653187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.653222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.653379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.653416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.653528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.653595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.653707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.653743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.653883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.653919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.654034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.654077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.654238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.654287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.654406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.654445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 
00:37:37.000 [2024-11-19 21:27:10.654575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.654620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.654767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.654806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.654930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.654966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.655084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.655129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.655268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.655304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.655414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.655450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.655583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.655618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.655755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.655790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.655907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.000 [2024-11-19 21:27:10.655947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.000 qpair failed and we were unable to recover it. 00:37:37.000 [2024-11-19 21:27:10.656088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.656132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 
00:37:37.001 [2024-11-19 21:27:10.656238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.656274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.656451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.656487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.656643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.656677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.656788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.656822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.656944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.656983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.657120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.657168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.657339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.657387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.657566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.657603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.657735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.657771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.657879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.657915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 
00:37:37.001 [2024-11-19 21:27:10.658097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.658149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.658273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.658310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.658455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.658490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.658625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.658660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.658801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.658836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.658963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.658998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.659137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.659171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.659278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.659316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.659453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.659488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.659604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.659641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 
00:37:37.001 [2024-11-19 21:27:10.659751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.659785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.659917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.659952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.660078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.660128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.660264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.660300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.660471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.660507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.660640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.660675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.660795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.660830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.660964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.660999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.661156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.661192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.661349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.661384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 
00:37:37.001 [2024-11-19 21:27:10.661517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.661552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.661715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.661751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.661867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.661901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.001 qpair failed and we were unable to recover it. 00:37:37.001 [2024-11-19 21:27:10.662045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.001 [2024-11-19 21:27:10.662093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.662205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.662241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.662345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.662381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.662528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.662563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.662675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.662710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.662812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.662846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.662986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.663020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 
00:37:37.002 [2024-11-19 21:27:10.663150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.663189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.663297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.663333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.663440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.663478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.663590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.663626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.663752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.663787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.663891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.663926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.664047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.664090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.664212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.664260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.664422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.664477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.664627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.664665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 
00:37:37.002 [2024-11-19 21:27:10.664769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.664805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.664942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.664979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.665095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.665134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.665257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.665293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.665421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.665466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.665646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.665685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.665801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.665837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.665993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.666033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.666193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.666230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.666361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.666410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 
00:37:37.002 [2024-11-19 21:27:10.666536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.666573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.666690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.666726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.666843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.666878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.667028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.667089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.667229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.667267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.667390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.667428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.667568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.667604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.002 qpair failed and we were unable to recover it. 00:37:37.002 [2024-11-19 21:27:10.667713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.002 [2024-11-19 21:27:10.667764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.667886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.667922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.668052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.668116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 
00:37:37.003 [2024-11-19 21:27:10.668236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.668273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.668391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.668427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.668576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.668612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.668751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.668795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.668939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.668976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.669122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.669158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.669299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.669359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.669536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.669590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.669734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.669768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.669909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.669945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 
00:37:37.003 [2024-11-19 21:27:10.670093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.670129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.670256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.670306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.670454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.670491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.670632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.670668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.670774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.670810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.670928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.670965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.671109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.671158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.671279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.671314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.671431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.671466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.671615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.671650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 
00:37:37.003 [2024-11-19 21:27:10.671775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.671809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.671949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.671986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.672136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.672187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.672311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.672351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.672507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.672545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.672680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.672717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.672823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.672858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.672984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.673034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.673169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.673205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.673316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.673358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 
00:37:37.003 [2024-11-19 21:27:10.673507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.673543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.673657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.673694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.673799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.673835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.003 [2024-11-19 21:27:10.673972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.003 [2024-11-19 21:27:10.674007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.003 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.674153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.674203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.674358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.674395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.674562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.674601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.674711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.674747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.674856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.674891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.674995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.675030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 
00:37:37.004 [2024-11-19 21:27:10.675184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.675220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.675364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.675419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.675560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.675598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.675734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.675775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.675901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.675942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.676059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.676112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.676212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.676246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.676362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.676417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.676552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.676588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.676701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.676736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 
00:37:37.004 [2024-11-19 21:27:10.676850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.676885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.676990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.677026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.677163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.677199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.677307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.677351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.677519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.677555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.677663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.677698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.677834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.677870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.677992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.678041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.678213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.678262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.678390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.678426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 
00:37:37.004 [2024-11-19 21:27:10.678544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.678590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.678702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.678737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.678840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.678876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.679012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.679047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.679178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.679217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.679344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.679380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.679561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.679611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.679758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.679795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.679904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.679940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.680076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.680123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 
00:37:37.004 [2024-11-19 21:27:10.680289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.680344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.680478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.680513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.004 qpair failed and we were unable to recover it. 00:37:37.004 [2024-11-19 21:27:10.680674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.004 [2024-11-19 21:27:10.680719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.680885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.680924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.681055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.681127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.681296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.681362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.681482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.681519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.681665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.681700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.681865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.681900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.682035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.682075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 
00:37:37.005 [2024-11-19 21:27:10.682198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.682232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.682351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.682387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.682491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.682527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.682658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.682698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.682839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.682882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.683004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.683041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.683191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.683240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.683365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.683410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.683546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.683601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.683740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.683783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 
00:37:37.005 [2024-11-19 21:27:10.683914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.683966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.684113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.684148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.684256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.684291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.684403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.684439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.684579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.684614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.684776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.684811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.684924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.684962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.685085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.685133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.685282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.685331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.685456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.685492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 
00:37:37.005 [2024-11-19 21:27:10.685631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.685665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.685805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.685839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.685989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.686024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.686175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.686224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.686351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.686403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.686519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.686556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.686701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.686738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.686851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.686887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.687004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.687044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 00:37:37.005 [2024-11-19 21:27:10.687172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.005 [2024-11-19 21:27:10.687207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.005 qpair failed and we were unable to recover it. 
00:37:37.011 [2024-11-19 21:27:10.721285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.011 [2024-11-19 21:27:10.721335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.011 qpair failed and we were unable to recover it. 00:37:37.011 [2024-11-19 21:27:10.721486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.011 [2024-11-19 21:27:10.721524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.011 qpair failed and we were unable to recover it. 00:37:37.011 [2024-11-19 21:27:10.721675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.011 [2024-11-19 21:27:10.721711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.011 qpair failed and we were unable to recover it. 00:37:37.011 [2024-11-19 21:27:10.721822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.011 [2024-11-19 21:27:10.721857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.011 qpair failed and we were unable to recover it. 00:37:37.011 [2024-11-19 21:27:10.721964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.011 [2024-11-19 21:27:10.721999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.011 qpair failed and we were unable to recover it. 00:37:37.011 [2024-11-19 21:27:10.722136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.011 [2024-11-19 21:27:10.722171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.011 qpair failed and we were unable to recover it. 00:37:37.011 [2024-11-19 21:27:10.722288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.011 [2024-11-19 21:27:10.722327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.011 qpair failed and we were unable to recover it. 00:37:37.011 [2024-11-19 21:27:10.722442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.011 [2024-11-19 21:27:10.722478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.011 qpair failed and we were unable to recover it. 00:37:37.011 [2024-11-19 21:27:10.722598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.011 [2024-11-19 21:27:10.722634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.011 qpair failed and we were unable to recover it. 00:37:37.011 [2024-11-19 21:27:10.722744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.011 [2024-11-19 21:27:10.722779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.011 qpair failed and we were unable to recover it. 
00:37:37.011 [2024-11-19 21:27:10.722935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.011 [2024-11-19 21:27:10.722985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.011 qpair failed and we were unable to recover it. 00:37:37.011 [2024-11-19 21:27:10.723108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.011 [2024-11-19 21:27:10.723146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.011 qpair failed and we were unable to recover it. 00:37:37.011 [2024-11-19 21:27:10.723274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.011 [2024-11-19 21:27:10.723314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.011 qpair failed and we were unable to recover it. 00:37:37.011 [2024-11-19 21:27:10.723457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.011 [2024-11-19 21:27:10.723498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.011 qpair failed and we were unable to recover it. 00:37:37.011 [2024-11-19 21:27:10.723667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.011 [2024-11-19 21:27:10.723702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.011 qpair failed and we were unable to recover it. 00:37:37.011 [2024-11-19 21:27:10.723809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.011 [2024-11-19 21:27:10.723854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.011 qpair failed and we were unable to recover it. 00:37:37.011 [2024-11-19 21:27:10.723972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 21:27:10.724007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 21:27:10.724139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 21:27:10.724189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 21:27:10.724316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 21:27:10.724355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 21:27:10.724468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 21:27:10.724506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 
00:37:37.306 [2024-11-19 21:27:10.724680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 21:27:10.724716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 21:27:10.724859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 21:27:10.724895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 21:27:10.725010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 21:27:10.725046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 21:27:10.725172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 21:27:10.725208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 21:27:10.725331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 21:27:10.725367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 21:27:10.725506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 21:27:10.725542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 21:27:10.725649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 21:27:10.725683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 21:27:10.725811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 21:27:10.725860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 21:27:10.726001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 21:27:10.726053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 21:27:10.726191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 21:27:10.726237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 
00:37:37.307 [2024-11-19 21:27:10.726365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.726400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.726510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.726545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.726651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.726687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.726797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.726833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.726949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.726984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.727105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.727145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.727290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.727327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.727460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.727496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.727626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.727661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.727768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.727804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 
00:37:37.307 [2024-11-19 21:27:10.727949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.727998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.728129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.728166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.728306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.728341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.728473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.728508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.728672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.728707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.728821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.728858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.728996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.729036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.729205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.729255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.729373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.729409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.729545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.729580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 
00:37:37.307 [2024-11-19 21:27:10.729696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.729731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.729894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.729929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.730044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.730088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.730241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.730296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.730437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.730476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.730614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.730649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.730787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.730823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.730965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.731014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.731166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.731203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.731357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.731407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 
00:37:37.307 [2024-11-19 21:27:10.731552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.731591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.731725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.731761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.731905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.731941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.732048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.732091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.732194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.732229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.732329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.732364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.732506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 21:27:10.732542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 21:27:10.732682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.732718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.732863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.732913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.733082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.733128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 
00:37:37.308 [2024-11-19 21:27:10.733277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.733317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.733487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.733523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.733669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.733705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.733821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.733856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.733993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.734030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.734198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.734234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.734385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.734436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.734590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.734626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.734738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.734773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.734947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.734982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 
00:37:37.308 [2024-11-19 21:27:10.735108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.735148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.735276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.735325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.735495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.735533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.735700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.735736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.735831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.735866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.735984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.736021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.736132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.736180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.736291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.736326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.736489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.736523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.736636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.736672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 
00:37:37.308 [2024-11-19 21:27:10.736855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.736894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.737056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.737115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.737259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.737296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.737432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.737472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.737576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.737611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.737749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.737785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.737914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.737950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.738083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.738119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.738241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.738281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.738411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.738446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 
00:37:37.308 [2024-11-19 21:27:10.738551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.738586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.738688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.738724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.738864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.738899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.739032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.739067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 21:27:10.739213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 21:27:10.739248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.739389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.739426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.739533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.739568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.739741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.739777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.739891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.739926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.740140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.740190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 
00:37:37.309 [2024-11-19 21:27:10.740331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.740369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.740511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.740548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.740701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.740736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.740839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.740873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.741034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.741083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.741225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.741260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.741376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.741411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.741517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.741552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.741652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.741687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.741833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.741870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 
00:37:37.309 [2024-11-19 21:27:10.742048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.742111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.742241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.742279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.742391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.742428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.742666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.742703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.742815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.742851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.743014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.743049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.743201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.743238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.743417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.743467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.743645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.743683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 21:27:10.743845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 21:27:10.743881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 
00:37:37.309 [2024-11-19 21:27:10.744032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.309 [2024-11-19 21:27:10.744067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.309 qpair failed and we were unable to recover it.
00:37:37.309 [2024-11-19 21:27:10.744203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.309 [2024-11-19 21:27:10.744241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.309 qpair failed and we were unable to recover it.
00:37:37.309 [2024-11-19 21:27:10.744404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.309 [2024-11-19 21:27:10.744440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.309 qpair failed and we were unable to recover it.
[The same three-line error sequence repeats continuously from [2024-11-19 21:27:10.744602] through [2024-11-19 21:27:10.780189] (console time 00:37:37.309-00:37:37.315): posix.c:1054:posix_sock_create reports connect() failed, errno = 111 against addr=10.0.0.2, port=4420; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair 0x615000210000, 0x61500021ff00, 0x6150001f2f00, or 0x6150001ffe80; and every attempt ends with "qpair failed and we were unable to recover it."]
00:37:37.315 [2024-11-19 21:27:10.780296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 21:27:10.780330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 21:27:10.780465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 21:27:10.780500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 21:27:10.780638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 21:27:10.780673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 21:27:10.780803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 21:27:10.780837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 21:27:10.780970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 21:27:10.781005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 21:27:10.781149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 21:27:10.781187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 21:27:10.781323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 21:27:10.781360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 21:27:10.781525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 21:27:10.781575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 21:27:10.781692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 21:27:10.781727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 21:27:10.781832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 21:27:10.781867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 
00:37:37.315 [2024-11-19 21:27:10.782005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 21:27:10.782040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 21:27:10.782157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 21:27:10.782197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 21:27:10.782329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 21:27:10.782378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 21:27:10.782498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 21:27:10.782537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 21:27:10.782643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 21:27:10.782680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 21:27:10.782803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 21:27:10.782839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 21:27:10.782944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 21:27:10.782978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 21:27:10.783080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 21:27:10.783116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.783222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.783257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.783379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.783429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 
00:37:37.316 [2024-11-19 21:27:10.783607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.783650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.783755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.783792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.783950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.783985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.784122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.784158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.784310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.784359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.784503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.784541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.784684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.784721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.784860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.784896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.785024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.785090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.785225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.785262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 
00:37:37.316 [2024-11-19 21:27:10.785388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.785425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.785565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.785602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.785707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.785742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.785845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.785880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.786029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.786065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.786196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.786235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.786387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.786423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.786557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.786592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.786696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.786731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.786867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.786903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 
00:37:37.316 [2024-11-19 21:27:10.787032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.787078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.787245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.787291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.787455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.787491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.787603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.787638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.787755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.787791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.787930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.787966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.788093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.788128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.788268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.788303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.788440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.788475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.788585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.788620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 
00:37:37.316 [2024-11-19 21:27:10.788750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.788785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.788919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.788955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.789064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.789112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.789261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.789311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 21:27:10.789440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 21:27:10.789476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.789690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.789725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.789937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.789972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.790135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.790171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.790305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.790340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.790450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.790485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 
00:37:37.317 [2024-11-19 21:27:10.790592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.790631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.790740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.790775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.790933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.790982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.791170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.791219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.791368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.791405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.791541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.791584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.791695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.791730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.791866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.791915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.792024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.792059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.792232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.792266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 
00:37:37.317 [2024-11-19 21:27:10.792401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.792435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.792600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.792634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.792781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.792821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.792939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.792976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.793097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.793147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.793260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.793295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.793457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.793492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.793644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.793678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.793794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.793829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.793972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.794010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 
00:37:37.317 [2024-11-19 21:27:10.794156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.794194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.794297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.794333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.794448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.794483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.794648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.794684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.794853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.794903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.795044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.795100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.795281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.795319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.795460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.795496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.795634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.795669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.795799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.795834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 
00:37:37.317 [2024-11-19 21:27:10.795933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.795968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.796106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.796141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 21:27:10.796273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 21:27:10.796323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.796437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.796477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.796622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.796659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.796796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.796831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.796968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.797003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.797130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.797166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.797309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.797345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.797477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.797513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 
00:37:37.318 [2024-11-19 21:27:10.797648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.797687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.797846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.797881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.798019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.798054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.798190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.798224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.798345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.798381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.798518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.798552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.798678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.798713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.798814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.798849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.798993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.799028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.799145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.799181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 
00:37:37.318 [2024-11-19 21:27:10.799321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.799357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.799490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.799525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.799637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.799671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.799801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.799836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.799948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.799983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.800196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.800231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.800372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.800408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.800541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.800576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.800693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.800728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.800867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.800903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 
00:37:37.318 [2024-11-19 21:27:10.801019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.801055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.801178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.801215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.801362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.801397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.801508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.801543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.801657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.801692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.801801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.801836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.801978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.802013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.802175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.802224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.802344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.802380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 21:27:10.802546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 21:27:10.802582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 
00:37:37.318 [2024-11-19 21:27:10.802709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.802744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.802851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.802887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.803011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.803061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.803217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.803254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.803357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.803393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.803494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.803528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.803681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.803716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.803830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.803870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.803993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.804029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.804151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.804187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 
00:37:37.319 [2024-11-19 21:27:10.804289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.804329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.804444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.804480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.804619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.804654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.804762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.804798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.804950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.805000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.805168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.805209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.805378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.805415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.805562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.805598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.805734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.805770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.805887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.805922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 
00:37:37.319 [2024-11-19 21:27:10.806053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.806109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.806257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.806295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.806439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.806474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.806614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.806648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.806759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.806794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.806906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.806941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.807047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.807090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.807244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.807281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.807430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.807478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.807632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.807669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 
00:37:37.319 [2024-11-19 21:27:10.807788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.807824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.807931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.807966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.808075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.808119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.808228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.808264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.808380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.808415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.808521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.808556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.808664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.808699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.808836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 21:27:10.808884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 21:27:10.809051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.809115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.809248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.809286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 
00:37:37.320 [2024-11-19 21:27:10.809398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.809434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.809576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.809611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.809710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.809745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.809859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.809893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.810063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.810109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.810237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.810287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.810434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.810471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.810611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.810648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.810761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.810797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.810928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.810963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 
00:37:37.320 [2024-11-19 21:27:10.811066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.811112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.811248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.811288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.811396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.811430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.811579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.811615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.811751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.811786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.811892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.811926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.812029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.812064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.812194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.812243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.812403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.812443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.812551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.812586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 
00:37:37.320 [2024-11-19 21:27:10.812714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.812750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.812851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.812885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.813018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.813052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.813193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.813228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.813361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.813401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.813518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.813555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.813661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.813696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.813860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.813895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.814001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.814037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.814159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.814208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 
00:37:37.320 [2024-11-19 21:27:10.814353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.814390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 21:27:10.814505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 21:27:10.814541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.814682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.814716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.814884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.814932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.815038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.815081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.815190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.815225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.815329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.815364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.815472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.815508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.815647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.815682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.815797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.815834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 
00:37:37.321 [2024-11-19 21:27:10.815988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.816038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.816168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.816205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.816347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.816383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.816520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.816556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.816700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.816735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.816877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.816913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.817095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.817145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.817301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.817350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.817466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.817503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.817642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.817679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 
00:37:37.321 [2024-11-19 21:27:10.817809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.817850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.818008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.818057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.818191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.818229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.818380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.818418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.818556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.818592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.818694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.818729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.818838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.818873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.819042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.819091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.819195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.819230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.819334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.819369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 
00:37:37.321 [2024-11-19 21:27:10.819505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.819540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.819735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.819785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.819955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.819992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.820117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.820155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.820302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.820338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.820446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.820482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.820637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.820672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.820815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 21:27:10.820850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 21:27:10.821000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.821035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.821149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.821185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 
00:37:37.322 [2024-11-19 21:27:10.821298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.821332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.821466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.821502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.821608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.821643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.821763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.821800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.821979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.822014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.822131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.822167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.822300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.822335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.822489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.822539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.822680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.822719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.822821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.822857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 
00:37:37.322 [2024-11-19 21:27:10.823022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.823057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.823185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.823235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.823391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.823430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.823616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.823653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.823776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.823816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.823961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.824007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.824191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.824228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.824345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.824382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.824495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.824535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.824687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.824722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 
00:37:37.322 [2024-11-19 21:27:10.824851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.824892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.825024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.825059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.825226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.825275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.825394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.825434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.825574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.825610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.825746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.825782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.825919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.825956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.826099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.826149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.826269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.826305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.826445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.826480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 
00:37:37.322 [2024-11-19 21:27:10.826596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.826632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.826768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.826803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.826910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.826945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.827084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.827119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.827260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.322 [2024-11-19 21:27:10.827295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.322 qpair failed and we were unable to recover it. 00:37:37.322 [2024-11-19 21:27:10.827464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.827500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.827633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.827668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.827779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.827814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.827914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.827949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.828081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.828131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 
00:37:37.323 [2024-11-19 21:27:10.828284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.828323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.828475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.828513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.828653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.828688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.828794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.828831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.828993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.829028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.829168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.829205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.829343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.829392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.829514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.829552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.829660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.829697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.829826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.829861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 
00:37:37.323 [2024-11-19 21:27:10.829980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.830016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.830136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.830172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.830276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.830312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.830476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.830512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.830635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.830672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.830811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.830847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.830976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.831012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.831181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.831217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.831357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.831393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.831535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.831569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 
00:37:37.323 [2024-11-19 21:27:10.831703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.831744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.831856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.831894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.832030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.832065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.323 qpair failed and we were unable to recover it. 00:37:37.323 [2024-11-19 21:27:10.832217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.323 [2024-11-19 21:27:10.832253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.832369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.832404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.832544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.832580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.832717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.832753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.832887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.832923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.833107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.833157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.833311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.833362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 
00:37:37.324 [2024-11-19 21:27:10.833481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.833518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.833658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.833693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.833828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.833863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.833991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.834026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.834189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.834225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.834337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.834371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.834530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.834565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.834703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.834738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.834857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.834907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.835062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.835119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 
00:37:37.324 [2024-11-19 21:27:10.835261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.835298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.835432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.835467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.835602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.835637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.835767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.835803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.835909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.835947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.836102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.836152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.836289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.836338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.836507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.836550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.836651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.836686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.836827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.836863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 
00:37:37.324 [2024-11-19 21:27:10.837043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.837100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.837250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.837286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.837415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.837450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.837614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.837650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.837755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.837790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.837928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.837968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.838135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.838185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.838319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.838369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.838488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.838526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.838638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.838675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 
00:37:37.324 [2024-11-19 21:27:10.838835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 21:27:10.838870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 21:27:10.838990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.839025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.839187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.839225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.839369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.839408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.839553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.839589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.839718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.839753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.839894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.839930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.840087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.840137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.840282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.840318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.840421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.840458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 
00:37:37.325 [2024-11-19 21:27:10.840610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.840645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.840754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.840789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.840979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.841030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.841185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.841222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.841346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.841396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.841533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.841578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.841688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.841724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.841863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.841899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.842084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.842134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.842266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.842315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 
00:37:37.325 [2024-11-19 21:27:10.842438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.842476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.842659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.842695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.842827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.842863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.843003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.843053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.843209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.843246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.843365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.843404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.843514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.843551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 21:27:10.843709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 21:27:10.843750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.843865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.843900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.844016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.844051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 
00:37:37.326 [2024-11-19 21:27:10.844179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.844214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.844348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.844385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.844552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.844588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.844686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.844721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.844874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.844909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.845044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.845090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.845231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.845266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.845406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.845442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.845581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.845616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.845718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.845753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 
00:37:37.326 [2024-11-19 21:27:10.845865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.845901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.846036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.846078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.846185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.846219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.846336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.846372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.846518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.846567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.846711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.846748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.846901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.846937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.847043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.847099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.847262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.847297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.847436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.847473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 
00:37:37.326 [2024-11-19 21:27:10.847642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.847677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.847785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.847821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.847928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.847964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.848114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.848163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.848299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.848349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.848479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.848518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.848627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.848664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.848800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.848836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.849013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.849050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 21:27:10.849173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.849209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 
00:37:37.326 [2024-11-19 21:27:10.849353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 21:27:10.849403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.849552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.849590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.849704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.849740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.849901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.849936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.850062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.850123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.850304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.850342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.850457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.850494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.850632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.850677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.850820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.850856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.850981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.851018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 
00:37:37.327 [2024-11-19 21:27:10.851169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.851205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.851313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.851347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.851452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.851486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.851612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.851649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.851790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.851836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.851963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.852000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.852160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.852210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.852348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.852386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.852485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.852521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.852653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.852688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 
00:37:37.327 [2024-11-19 21:27:10.852853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.852902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.853050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.853094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.853209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.853246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.853388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.853425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.853533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.853568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.853719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.853755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.853871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.853908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.854088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.854147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.854292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.854328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.854471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.854506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 
00:37:37.327 [2024-11-19 21:27:10.854609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.854644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.854805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.854840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.854977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.855024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.855212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.855262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.855435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.855473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.855609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 21:27:10.855644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 21:27:10.855801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.855836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.855976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.856011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.856176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.856212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.856317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.856353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 
00:37:37.328 [2024-11-19 21:27:10.856494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.856530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.856662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.856697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.856807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.856842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.857011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.857061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.857240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.857290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.857413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.857452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.857588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.857623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.857759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.857800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.857937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.857987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.858137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.858174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 
00:37:37.328 [2024-11-19 21:27:10.858306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.858342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.858502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.858537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.858671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.858706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.858805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.858840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.858951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.858986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.859126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.859166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.859290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.859328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.859470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.859506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.859637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.859672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.859839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.859874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 
00:37:37.328 [2024-11-19 21:27:10.860003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.860052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.860217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.860253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.860379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.860414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.860549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.860583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.860721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.860756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.860911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.328 [2024-11-19 21:27:10.860960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.328 qpair failed and we were unable to recover it. 00:37:37.328 [2024-11-19 21:27:10.861084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.861121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.861261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.861296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.861431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.861467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.861624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.861660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 
00:37:37.329 [2024-11-19 21:27:10.861792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.861842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.861984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.862020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.862164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.862200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.862307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.862343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.862489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.862529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.862692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.862728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.862872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.862908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.863022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.863057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.863171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.863206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.863308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.863344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 
00:37:37.329 [2024-11-19 21:27:10.863482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.863517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.863651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.863687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.863832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.863869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.863995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.864032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.864160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.864196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.864329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.864365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.864505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.864541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.864678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.864719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.864879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.864915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.865035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.865081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 
00:37:37.329 [2024-11-19 21:27:10.865216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.865251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.865411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.865446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.865578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.865613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.865718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.865753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.865897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.865934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.866109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.866159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.866297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.866346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.866519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.866555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.866691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.866726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.866889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.866924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 
00:37:37.329 [2024-11-19 21:27:10.867055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.867096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.867215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.867252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.867364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.867400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.867545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.329 [2024-11-19 21:27:10.867581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.329 qpair failed and we were unable to recover it. 00:37:37.329 [2024-11-19 21:27:10.867688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.867723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.867856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.867891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.868045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.868106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.868215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.868252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.868390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.868425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.868586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.868621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 
00:37:37.330 [2024-11-19 21:27:10.868749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.868784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.868934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.868983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.869129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.869166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.869302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.869337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.869481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.869517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.869620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.869656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.869822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.869858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.870021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.870058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.870212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.870251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.870408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.870457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 
00:37:37.330 [2024-11-19 21:27:10.870569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.870607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.870752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.870788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.870925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.870960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.871126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.871163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.871271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.871308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.871434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.871469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.871601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.871636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.871746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.871790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.871937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.871986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.872132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.872168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 
00:37:37.330 [2024-11-19 21:27:10.872283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.872319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.872463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.872499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.872606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.872641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.872779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.872816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.872927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.872962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 21:27:10.873122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 21:27:10.873172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.873303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.873342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.873505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.873542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.873702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.873737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.873851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.873886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 
00:37:37.331 [2024-11-19 21:27:10.874024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.874062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.874229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.874279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.874391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.874427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.874582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.874618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.874729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.874764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.874911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.874946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.875061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.875103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.875266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.875302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.875440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.875475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.875611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.875646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 
00:37:37.331 [2024-11-19 21:27:10.875785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.875820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.875964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.876003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.876137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.876187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.876310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.876348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.876469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.876506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.876687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.876723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.876880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.876929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.877086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.877126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.877260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.877309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.877421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.877458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 
00:37:37.331 [2024-11-19 21:27:10.877593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.877628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.877735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.877770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.877912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.877947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.878082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.878117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.878229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 21:27:10.878267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 21:27:10.878387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.878426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.878594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.878630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.878766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.878807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.878939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.878974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.879111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.879146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 
00:37:37.332 [2024-11-19 21:27:10.879249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.879283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.879424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.879459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.879591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.879626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.879741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.879777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.879917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.879954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.880105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.880155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.880275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.880313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.880474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.880510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.880676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.880711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.880820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.880856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 
00:37:37.332 [2024-11-19 21:27:10.880992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.881038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.881204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.881254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.881379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.881418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.881550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.881586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.881723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.881759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.881927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.881963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.882120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.882170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.882317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.882353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.882492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.882529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.882674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.882709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 
00:37:37.332 [2024-11-19 21:27:10.882844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.882879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.883013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.883049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.883199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.883235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.883399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.883449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.883610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.883659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.883812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.883851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.883984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.884020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.884155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.884192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.884306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.884342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 21:27:10.884458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 21:27:10.884493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 
00:37:37.332 [2024-11-19 21:27:10.884645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.884681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.884857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.884893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.885012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.885047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.885172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.885208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.885366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.885401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.885537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.885576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.885729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.885769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.885931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.885972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.886120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.886170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.886288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.886333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 
00:37:37.333 [2024-11-19 21:27:10.886478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.886514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.886624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.886660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.886837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.886872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.887006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.887042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.887195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.887238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.887385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.887424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.887534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.887572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.887674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.887710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.887817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.887852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.887992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.888028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 
00:37:37.333 [2024-11-19 21:27:10.888175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.888211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.888371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.888421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.888540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.888577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.888700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.888736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.888880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.888915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.889018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.889053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.889243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.889281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.889444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.889479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.889620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.889656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.889760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.889795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 
00:37:37.333 [2024-11-19 21:27:10.889930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.889967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.890149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.890198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.890342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.890378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.890494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.890530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.890677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.890714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.890875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 21:27:10.890910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 21:27:10.891056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.334 [2024-11-19 21:27:10.891100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.334 qpair failed and we were unable to recover it. 00:37:37.334 [2024-11-19 21:27:10.891233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.334 [2024-11-19 21:27:10.891268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.334 qpair failed and we were unable to recover it. 00:37:37.334 [2024-11-19 21:27:10.891429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.334 [2024-11-19 21:27:10.891478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.334 qpair failed and we were unable to recover it. 00:37:37.334 [2024-11-19 21:27:10.891626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.334 [2024-11-19 21:27:10.891664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.334 qpair failed and we were unable to recover it. 
00:37:37.334 [2024-11-19 21:27:10.891807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.334 [2024-11-19 21:27:10.891843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.334 qpair failed and we were unable to recover it.
00:37:37.334 [2024-11-19 21:27:10.891978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.334 [2024-11-19 21:27:10.892014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.334 qpair failed and we were unable to recover it.
00:37:37.334 [2024-11-19 21:27:10.892161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.334 [2024-11-19 21:27:10.892197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.334 qpair failed and we were unable to recover it.
00:37:37.334 [... the same three-line failure pattern (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error; qpair failed and we were unable to recover it) repeats through 21:27:10.928 for tqpair handles 0x6150001ffe80, 0x615000210000, 0x61500021ff00, and 0x6150001f2f00, all targeting addr=10.0.0.2, port=4420 ...]
00:37:37.339 [2024-11-19 21:27:10.928725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.339 [2024-11-19 21:27:10.928760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.339 qpair failed and we were unable to recover it. 00:37:37.339 [2024-11-19 21:27:10.928870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.339 [2024-11-19 21:27:10.928905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.339 qpair failed and we were unable to recover it. 00:37:37.339 [2024-11-19 21:27:10.929061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.339 [2024-11-19 21:27:10.929104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.339 qpair failed and we were unable to recover it. 00:37:37.339 [2024-11-19 21:27:10.929218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.339 [2024-11-19 21:27:10.929253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.339 qpair failed and we were unable to recover it. 00:37:37.339 [2024-11-19 21:27:10.929387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.339 [2024-11-19 21:27:10.929422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.339 qpair failed and we were unable to recover it. 00:37:37.339 [2024-11-19 21:27:10.929533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.339 [2024-11-19 21:27:10.929568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.339 qpair failed and we were unable to recover it. 00:37:37.339 [2024-11-19 21:27:10.929665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.339 [2024-11-19 21:27:10.929700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.929842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.929878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.930014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.930049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.930160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.930195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 
00:37:37.340 [2024-11-19 21:27:10.930328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.930364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.930501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.930536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.930642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.930676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.930846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.930883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.931019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.931053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.931168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.931203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.931339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.931374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.931490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.931526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.931661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.931697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.931832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.931868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 
00:37:37.340 [2024-11-19 21:27:10.931978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.932013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.932157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.932206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.932352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.932390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.932522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.932558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.932725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.932766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.932877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.932913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.933057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.933113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.933236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.933274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.933414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.933450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.933596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.933631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 
00:37:37.340 [2024-11-19 21:27:10.933781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.933832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.933973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.934010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.934146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.934184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.934324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.934361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.934466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.934501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.934641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.934677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.934818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.934853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.935008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.935058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.935214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.935251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.340 qpair failed and we were unable to recover it. 00:37:37.340 [2024-11-19 21:27:10.935361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.340 [2024-11-19 21:27:10.935398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 
00:37:37.341 [2024-11-19 21:27:10.935559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.935594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.935700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.935736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.935908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.935943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.936053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.936099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.936238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.936273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.936439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.936476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.936641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.936676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.936813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.936850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.936988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.937024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.937164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.937214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 
00:37:37.341 [2024-11-19 21:27:10.937388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.937425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.937572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.937607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.937749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.937783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.937919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.937953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.938094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.938129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.938243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.938277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.938418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.938452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.938588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.938623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.938732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.938767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.938902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.938937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 
00:37:37.341 [2024-11-19 21:27:10.939063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.939104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.939208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.939256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.939364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.939399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.939525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.939561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.939727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.939767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.939905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.939942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.940067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.940108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.940219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.940254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.940366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.940402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.940516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.940565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 
00:37:37.341 [2024-11-19 21:27:10.940699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.940738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.940891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.940940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.941087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.941142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.941253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.941289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.941401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.941437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.941574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.941610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.941749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.341 [2024-11-19 21:27:10.941784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.341 qpair failed and we were unable to recover it. 00:37:37.341 [2024-11-19 21:27:10.941888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.941923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.942040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.942084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.942214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.942249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 
00:37:37.342 [2024-11-19 21:27:10.942384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.942419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.942538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.942574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.942677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.942711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.942828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.942865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.943012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.943049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.943216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.943265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.943408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.943446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.943578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.943615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.943729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.943778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.943893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.943930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 
00:37:37.342 [2024-11-19 21:27:10.944063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.944105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.944221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.944256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.944360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.944396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.944508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.944543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.944659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.944700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.944819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.944855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.945015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.945065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.945227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.945277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.945398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.945435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.945573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.945608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 
00:37:37.342 [2024-11-19 21:27:10.945751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.945787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.945901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.945937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.946078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.946113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.946226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.946261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.946381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.946425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.946564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.946601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.946747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.946796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.946947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.946987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.947125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.947164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.947278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.947314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 
00:37:37.342 [2024-11-19 21:27:10.947451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.947487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.947603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.947638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.947744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.947779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.947907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.947943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.342 [2024-11-19 21:27:10.948055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.342 [2024-11-19 21:27:10.948099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.342 qpair failed and we were unable to recover it. 00:37:37.343 [2024-11-19 21:27:10.948211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.343 [2024-11-19 21:27:10.948246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.343 qpair failed and we were unable to recover it. 00:37:37.343 [2024-11-19 21:27:10.948359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.343 [2024-11-19 21:27:10.948393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.343 qpair failed and we were unable to recover it. 00:37:37.343 [2024-11-19 21:27:10.948505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.343 [2024-11-19 21:27:10.948539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.343 qpair failed and we were unable to recover it. 00:37:37.343 [2024-11-19 21:27:10.948664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.343 [2024-11-19 21:27:10.948700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.343 qpair failed and we were unable to recover it. 00:37:37.343 [2024-11-19 21:27:10.948836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.343 [2024-11-19 21:27:10.948873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.343 qpair failed and we were unable to recover it. 
00:37:37.343 [2024-11-19 21:27:10.948988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.343 [2024-11-19 21:27:10.949026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.343 qpair failed and we were unable to recover it. 00:37:37.343 [2024-11-19 21:27:10.949139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.343 [2024-11-19 21:27:10.949175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.343 qpair failed and we were unable to recover it. 00:37:37.343 [2024-11-19 21:27:10.949285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.343 [2024-11-19 21:27:10.949320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.343 qpair failed and we were unable to recover it. 00:37:37.343 [2024-11-19 21:27:10.949457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.343 [2024-11-19 21:27:10.949493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.343 qpair failed and we were unable to recover it. 00:37:37.343 [2024-11-19 21:27:10.949600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.343 [2024-11-19 21:27:10.949635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.343 qpair failed and we were unable to recover it. 00:37:37.343 [2024-11-19 21:27:10.949740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.343 [2024-11-19 21:27:10.949776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.343 qpair failed and we were unable to recover it. 00:37:37.343 [2024-11-19 21:27:10.949919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.343 [2024-11-19 21:27:10.949968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.343 qpair failed and we were unable to recover it. 00:37:37.343 [2024-11-19 21:27:10.950097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.343 [2024-11-19 21:27:10.950134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.343 qpair failed and we were unable to recover it. 00:37:37.343 [2024-11-19 21:27:10.950297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.343 [2024-11-19 21:27:10.950333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.343 qpair failed and we were unable to recover it. 00:37:37.343 [2024-11-19 21:27:10.950448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.343 [2024-11-19 21:27:10.950483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.343 qpair failed and we were unable to recover it. 
00:37:37.343 [2024-11-19 21:27:10.950647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.343 [2024-11-19 21:27:10.950682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.343 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for every connection attempt between 21:27:10.950 and 21:27:10.986, cycling over tqpair handles 0x615000210000, 0x6150001f2f00, 0x61500021ff00, and 0x6150001ffe80, always against addr=10.0.0.2, port=4420 and always with errno = 111; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:37:37.349 [2024-11-19 21:27:10.985889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.349 [2024-11-19 21:27:10.985925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.349 qpair failed and we were unable to recover it.
00:37:37.349 [2024-11-19 21:27:10.986043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.986101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.986224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.986264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.986383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.986419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.986554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.986589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.986753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.986793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.986929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.986964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.987096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.987131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.987260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.987300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.987409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.987446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.987565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.987602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 
00:37:37.349 [2024-11-19 21:27:10.987709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.987745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.987881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.987917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.988042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.988105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.988223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.988259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.988398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.988433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.988568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.988602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.988768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.988802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.988939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.988973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.989119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.989155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.989312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.989351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 
00:37:37.349 [2024-11-19 21:27:10.989478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.989513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.989624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.989659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.989766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.989800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.989960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.989995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.990098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.990134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.990267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.990301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.990411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.990446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.990578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.990623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.990738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.990779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.990901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.990941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 
00:37:37.349 [2024-11-19 21:27:10.991100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.991151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.991288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.991338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.991474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.991522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.991650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.349 [2024-11-19 21:27:10.991687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.349 qpair failed and we were unable to recover it. 00:37:37.349 [2024-11-19 21:27:10.991798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.991834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.991967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.992001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.992116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.992151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.992299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.992337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.992454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.992491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.992608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.992644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 
00:37:37.350 [2024-11-19 21:27:10.992777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.992811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.992916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.992951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.993067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.993111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.993221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.993256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.993373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.993413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.993518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.993554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.993667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.993704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.993856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.993905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.994026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.994064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.994189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.994226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 
00:37:37.350 [2024-11-19 21:27:10.994353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.994390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.994534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.994571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.994697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.994733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.994854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.994890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.995022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.995057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.995191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.995226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.995330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.995365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.995506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.995541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.995682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.995717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.995828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.995864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 
00:37:37.350 [2024-11-19 21:27:10.995967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.996002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.996132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.996168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.996288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.996323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.996479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.996514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.996620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.996655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.996793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.996829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.996951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.997000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.997120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.997159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.997271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.997307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.997435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.997471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 
00:37:37.350 [2024-11-19 21:27:10.997588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.997624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.997764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.997800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.997936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.997972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.998097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.998146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.998266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.350 [2024-11-19 21:27:10.998304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.350 qpair failed and we were unable to recover it. 00:37:37.350 [2024-11-19 21:27:10.998416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:10.998451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:10.998589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:10.998624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:10.998763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:10.998798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:10.998909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:10.998944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:10.999097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:10.999133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 
00:37:37.351 [2024-11-19 21:27:10.999243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:10.999278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:10.999399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:10.999435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:10.999572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:10.999607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:10.999720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:10.999757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:10.999936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:10.999991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.000108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.000145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.000271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.000307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.000422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.000469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.000580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.000616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.000750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.000786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 
00:37:37.351 [2024-11-19 21:27:11.000931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.000966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.001123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.001172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.001289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.001326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.001461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.001496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.001610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.001644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.001814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.001850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.001959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.001994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.002118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.002155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.002285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.002323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.002460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.002496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 
00:37:37.351 [2024-11-19 21:27:11.002631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.002666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.002801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.002835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.002940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.002975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.003083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.003120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.003251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.003286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.003405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.003440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.003541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.003577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.003704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.003741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.003867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.003916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.004067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.004109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 
00:37:37.351 [2024-11-19 21:27:11.004224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.004261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.004410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.004446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.004560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.004595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.004701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.004736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.004873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.004908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.351 qpair failed and we were unable to recover it. 00:37:37.351 [2024-11-19 21:27:11.005043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.351 [2024-11-19 21:27:11.005083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.005195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.005230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.005330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.005365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.005469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.005504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.005642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.005677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 
00:37:37.352 [2024-11-19 21:27:11.005795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.005830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.005963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.005997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.006133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.006183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.006299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.006335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.006447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.006486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.006643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.006680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.006779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.006813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.006973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.007023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.007148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.007185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.007302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.007341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 
00:37:37.352 [2024-11-19 21:27:11.007450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.007487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.007616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.007651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.007760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.007795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.007929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.007965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.008093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.008133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.008245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.008281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.008388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.008423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.008550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.008587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.008727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.008762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.008875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.008923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 
00:37:37.352 [2024-11-19 21:27:11.009077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.009115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.009255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.009291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.009423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.009457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.009566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.009601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.009740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.009775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.009886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.009921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.010082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.010118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.010229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.010264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.010432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.010469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.010610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.010646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 
00:37:37.352 [2024-11-19 21:27:11.010795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.010830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.010963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.011001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.011143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.011179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.011310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.011359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.011504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.011541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.352 qpair failed and we were unable to recover it. 00:37:37.352 [2024-11-19 21:27:11.011718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.352 [2024-11-19 21:27:11.011755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.011871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.011918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.012028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.012064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.012175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.012211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.012318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.012354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 
00:37:37.353 [2024-11-19 21:27:11.012459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.012494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.012594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.012630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.012777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.012812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.012949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.012985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.013143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.013179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.013304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.013340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.013469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.013519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.013672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.013710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.013842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.013878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.013988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.014023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 
00:37:37.353 [2024-11-19 21:27:11.014139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.014174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.014309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.014344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.014563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.014605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.014714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.014749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.014972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.015007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.015112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.015147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.015265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.015300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.015462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.015497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.015616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.015650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.015785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.015820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 
00:37:37.353 [2024-11-19 21:27:11.015963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.015999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.016151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.016201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.016322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.016371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.016509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.016545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.016691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.016727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.016861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.016897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.017002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.017038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.353 [2024-11-19 21:27:11.017168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.353 [2024-11-19 21:27:11.017203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.353 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.017308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.017343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.017448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.017483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 
00:37:37.354 [2024-11-19 21:27:11.017602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.017637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.017751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.017791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.017888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.017923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.018057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.018101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.018239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.018289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.018450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.018494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.018605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.018641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.018780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.018815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.018913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.018947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.019084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.019120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 
00:37:37.354 [2024-11-19 21:27:11.019254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.019290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.019437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.019472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.019620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.019655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.019756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.019791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.019952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.019986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.020104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.020139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.020249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.020284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.020396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.020430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.020586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.020621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.020760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.020795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 
00:37:37.354 [2024-11-19 21:27:11.020917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.020954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.021057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.354 [2024-11-19 21:27:11.021103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.354 qpair failed and we were unable to recover it. 00:37:37.354 [2024-11-19 21:27:11.021244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.021280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.021401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.021436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.021576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.021611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.021749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.021785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.021925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.021961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.022102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.022152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.022277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.022315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.022432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.022469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 
00:37:37.355 [2024-11-19 21:27:11.022572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.022607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.022712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.022747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.022885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.022922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.023043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.023100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.023233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.023283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.023464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.023502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.023613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.023649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.023799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.023834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.023944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.023979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.024100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.024137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 
00:37:37.355 [2024-11-19 21:27:11.024270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.024320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.024442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.024484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.024616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.024651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.024761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.024796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.024905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.024940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.025092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.025142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.025281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.025318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.025483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.025518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.025614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.025649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.025756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.025792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 
00:37:37.355 [2024-11-19 21:27:11.025906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.025942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.026095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.026132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.026269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.026304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.026413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.026448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.026553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.026588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.355 qpair failed and we were unable to recover it. 00:37:37.355 [2024-11-19 21:27:11.026705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.355 [2024-11-19 21:27:11.026742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.026854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.026890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.026996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.027032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.027161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.027198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.027335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.027371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 
00:37:37.356 [2024-11-19 21:27:11.027479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.027515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.027621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.027658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.027771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.027807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.027941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.027977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.028081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.028129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.028258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.028293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.028416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.028451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.028575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.028611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.028750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.028799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.028921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.028959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 
00:37:37.356 [2024-11-19 21:27:11.029109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.029151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.029290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.029326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.029470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.029506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.029609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.029643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.029775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.029811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.029920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.029955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.030120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.030155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.030258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.030295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.030435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.030469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.030618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.030667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 
00:37:37.356 [2024-11-19 21:27:11.030786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.030823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.030940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.030995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.031110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.031148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.031257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.031293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.031394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.031430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.031584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.031620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.031728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.031763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.031901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.031939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.032048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.032092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.356 [2024-11-19 21:27:11.032212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.032247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 
00:37:37.356 [2024-11-19 21:27:11.032383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.356 [2024-11-19 21:27:11.032418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.356 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.032553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.032589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.032701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.032735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.032834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.032868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.032963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.032998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.033149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.033199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.033353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.033391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.033500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.033547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.033659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.033695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.033842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.033880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 
00:37:37.357 [2024-11-19 21:27:11.034044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.034110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.034239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.034275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.034389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.034423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.034529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.034564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.034676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.034712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.034847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.034883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.035021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.035057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.035182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.035219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.035361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.035396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.035505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.035540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 
00:37:37.357 [2024-11-19 21:27:11.035695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.035745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.035864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.035900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.036045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.036103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.036222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.036258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.036369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.036405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.036513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.036550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.036713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.036748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.036881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.036915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.037051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.037095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 00:37:37.357 [2024-11-19 21:27:11.037207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.357 [2024-11-19 21:27:11.037247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.357 qpair failed and we were unable to recover it. 
00:37:37.358 [2024-11-19 21:27:11.037367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.037407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.037518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.037558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.037704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.037740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.037867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.037904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.038043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.038086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.038212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.038248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.038396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.038446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.038634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.038671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.038809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.038845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.038954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.038989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 
00:37:37.358 [2024-11-19 21:27:11.039099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.039134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.039236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.039271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.039374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.039409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.039570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.039605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.039719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.039754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.039900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.039936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.040076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.040112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.040243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.040278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.040417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.040452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.040569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.040604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 
00:37:37.358 [2024-11-19 21:27:11.040743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.040778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.040889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.040924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.041041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.041088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.041210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.041245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.041412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.041447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.041592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.041627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.041739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.041780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.041888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.041925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.042086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.042135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 00:37:37.358 [2024-11-19 21:27:11.042275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.042325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.358 qpair failed and we were unable to recover it. 
00:37:37.358 [2024-11-19 21:27:11.042496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.358 [2024-11-19 21:27:11.042534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.042676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.042712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.042821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.042857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.042994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.043044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.043165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.043201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.043317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.043355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.043475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.043512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.043623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.043658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.043784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.043828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.043990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.044025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 
00:37:37.359 [2024-11-19 21:27:11.044180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.044230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.044347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.044387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.044523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.044559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.044656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.044690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.044803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.044838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.044958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.044995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.045123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.045172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.045315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.045354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.045496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.045538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.045649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.045684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 
00:37:37.359 [2024-11-19 21:27:11.045792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.045827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.045959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.359 [2024-11-19 21:27:11.045993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.359 qpair failed and we were unable to recover it. 00:37:37.359 [2024-11-19 21:27:11.046105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.046141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.046271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.046306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.046417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.046451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.046574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.046609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.046715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.046750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.046865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.046901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.047032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.047067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.047211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.047246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 
00:37:37.360 [2024-11-19 21:27:11.047375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.047410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.047519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.047555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.047689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.047723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.047837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.047873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.048040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.048082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.048181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.048216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.048328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.048362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.048492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.048526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.048647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.048682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.048818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.048852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 
00:37:37.360 [2024-11-19 21:27:11.048983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.049018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.049134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.049170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.049276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.049311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.049532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.049567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.049666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.049701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.049841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.049876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.049980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.050014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.050157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.050192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.050322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.050356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.050450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.050485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 
00:37:37.360 [2024-11-19 21:27:11.050589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.050624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.050783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.050822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.050956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.360 [2024-11-19 21:27:11.051005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.360 qpair failed and we were unable to recover it. 00:37:37.360 [2024-11-19 21:27:11.051155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.051205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.051330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.051368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.051485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.051521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.051630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.051666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.051777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.051814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.051959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.051994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.052143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.052183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 
00:37:37.361 [2024-11-19 21:27:11.052298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.052335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.052471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.052506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.052662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.052697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.052821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.052870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.053021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.053059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.053209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.053259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.053369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.053406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.053556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.053591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.053700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.053741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.053883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.053931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 
00:37:37.361 [2024-11-19 21:27:11.054096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.054145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.054261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.054299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.054438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.054473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.054595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.054630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.054770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.054816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.054932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.054969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.055099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.055148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.055275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.055324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.055479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.055517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.055626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.055662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 
00:37:37.361 [2024-11-19 21:27:11.055804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.055840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.055939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.055974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.056120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.056156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.056305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.361 [2024-11-19 21:27:11.056343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.361 qpair failed and we were unable to recover it. 00:37:37.361 [2024-11-19 21:27:11.056478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.056515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.056630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.056665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.056781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.056816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.056944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.056980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.057092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.057140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.057281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.057316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 
00:37:37.362 [2024-11-19 21:27:11.057442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.057484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.057600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.057641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.057783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.057819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.057969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.058004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.058160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.058196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.058302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.058337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.058450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.058485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.058588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.058624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.058731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.058768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.058884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.058919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 
00:37:37.362 [2024-11-19 21:27:11.059045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.059102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.059228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.059264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.059409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.059445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.059586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.059621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.059768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.059804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.059920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.059955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.060061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.060107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.060237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.060286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.060409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.060446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.060555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.060591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 
00:37:37.362 [2024-11-19 21:27:11.060720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.060755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.060884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.060919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.061031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.362 [2024-11-19 21:27:11.061067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.362 qpair failed and we were unable to recover it. 00:37:37.362 [2024-11-19 21:27:11.061190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.363 [2024-11-19 21:27:11.061227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.363 qpair failed and we were unable to recover it. 00:37:37.363 [2024-11-19 21:27:11.061347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.363 [2024-11-19 21:27:11.061384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.363 qpair failed and we were unable to recover it. 00:37:37.363 [2024-11-19 21:27:11.061515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.363 [2024-11-19 21:27:11.061550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.363 qpair failed and we were unable to recover it. 00:37:37.363 [2024-11-19 21:27:11.061697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.363 [2024-11-19 21:27:11.061733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.363 qpair failed and we were unable to recover it. 00:37:37.363 [2024-11-19 21:27:11.061847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.363 [2024-11-19 21:27:11.061894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.363 qpair failed and we were unable to recover it. 00:37:37.363 [2024-11-19 21:27:11.062015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.363 [2024-11-19 21:27:11.062052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.363 qpair failed and we were unable to recover it. 00:37:37.363 [2024-11-19 21:27:11.062182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.363 [2024-11-19 21:27:11.062218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.363 qpair failed and we were unable to recover it. 
00:37:37.363 [2024-11-19 21:27:11.062374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.363 [2024-11-19 21:27:11.062411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.363 qpair failed and we were unable to recover it. 00:37:37.363 [2024-11-19 21:27:11.062521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.363 [2024-11-19 21:27:11.062556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.363 qpair failed and we were unable to recover it. 00:37:37.363 [2024-11-19 21:27:11.062679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.363 [2024-11-19 21:27:11.062715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.363 qpair failed and we were unable to recover it. 00:37:37.363 [2024-11-19 21:27:11.062852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.363 [2024-11-19 21:27:11.062887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.363 qpair failed and we were unable to recover it. 00:37:37.363 [2024-11-19 21:27:11.062988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.363 [2024-11-19 21:27:11.063023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.363 qpair failed and we were unable to recover it. 00:37:37.363 [2024-11-19 21:27:11.063149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.363 [2024-11-19 21:27:11.063186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.363 qpair failed and we were unable to recover it. 00:37:37.363 [2024-11-19 21:27:11.063322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.363 [2024-11-19 21:27:11.063358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.363 qpair failed and we were unable to recover it. 00:37:37.363 [2024-11-19 21:27:11.063471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.363 [2024-11-19 21:27:11.063506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.363 qpair failed and we were unable to recover it. 00:37:37.363 [2024-11-19 21:27:11.063622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.363 [2024-11-19 21:27:11.063658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.658 qpair failed and we were unable to recover it. 00:37:37.658 [2024-11-19 21:27:11.063778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.658 [2024-11-19 21:27:11.063814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.658 qpair failed and we were unable to recover it. 
00:37:37.658 [2024-11-19 21:27:11.063924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.063959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.064076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.064120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.064227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.064262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.064389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.064439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.064571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.064607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.064716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.064752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.064970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.065005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.065111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.065147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.065258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.065293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.065422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.065458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 
00:37:37.659 [2024-11-19 21:27:11.065561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.065596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.065705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.065741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.065841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.065877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.065983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.066018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.066129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.066165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.066280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.066317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.066435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.066470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.066591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.066628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.066765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.066801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.066928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.066978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 
00:37:37.659 [2024-11-19 21:27:11.067097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.067136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.067250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.067287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.067433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.067469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.067611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.067646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.067784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.067819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.067958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.067994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.068102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.068137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.068233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.068268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.068382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.068418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.068552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.068587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 
00:37:37.659 [2024-11-19 21:27:11.068728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.068767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.068873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.068909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.069042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.069099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.069246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.069283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.659 [2024-11-19 21:27:11.069431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.659 [2024-11-19 21:27:11.069466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.659 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.069577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.069613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.069754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.069790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.069897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.069935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.070056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.070126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.070249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.070286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 
00:37:37.660 [2024-11-19 21:27:11.070403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.070438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.070540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.070581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.070689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.070724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.070840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.070877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.070980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.071016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.071172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.071208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.071321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.071366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.071488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.071523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.071624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.071659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.071794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.071830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 
00:37:37.660 [2024-11-19 21:27:11.071940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.071977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.072130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.072167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.072305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.072346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.072455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.072490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.072622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.072657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.072774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.072811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.072918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.072954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.073056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.073098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.073240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.073275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.073419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.073454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 
00:37:37.660 [2024-11-19 21:27:11.073561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.073597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.073742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.073778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.073893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.073927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.074055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.074113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.074246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.074285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.074518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.074554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.074723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.074758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.074869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.074904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.075021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.075057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.075208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.075257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 
00:37:37.660 [2024-11-19 21:27:11.075382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.075420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.075541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.075577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.660 qpair failed and we were unable to recover it. 00:37:37.660 [2024-11-19 21:27:11.075710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.660 [2024-11-19 21:27:11.075745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.075883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.075919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.076024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.076060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.076197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.076234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.076368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.076404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.076514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.076550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.076654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.076689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.076824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.076861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 
00:37:37.661 [2024-11-19 21:27:11.076999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.077034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.077165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.077219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.077351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.077387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.077494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.077531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.077646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.077681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.077792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.077827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.077955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.077991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.078136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.078172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.078291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.078341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.078570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.078607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 
00:37:37.661 [2024-11-19 21:27:11.078719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.078755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.078897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.078932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.079050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.079097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.079248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.079284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.079404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.079440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.079549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.079584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.079751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.079787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.079889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.079924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.080064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.080105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.080241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.080275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 
00:37:37.661 [2024-11-19 21:27:11.080457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.080493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.080600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.080634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.080747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.080782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.080945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.080980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.081093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.081141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.081256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.081292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.081413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.661 [2024-11-19 21:27:11.081449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.661 qpair failed and we were unable to recover it. 00:37:37.661 [2024-11-19 21:27:11.081561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.081595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.081734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.081769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.081881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.081918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 
00:37:37.662 [2024-11-19 21:27:11.082040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.082096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.082248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.082286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.082431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.082468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.082576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.082613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.082752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.082789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.082936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.082973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.083118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.083154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.083369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.083404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.083539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.083575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.083685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.083721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 
00:37:37.662 [2024-11-19 21:27:11.083868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.083905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.084024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.084094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.084221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.084258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.084402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.084439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.084588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.084623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.084736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.084770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.084882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.084919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.085063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.085105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.085215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.085250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.085364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.085399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 
00:37:37.662 [2024-11-19 21:27:11.085614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.085649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.085759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.085795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.085907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.085945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.086102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.086157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.086313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.086372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.086525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.086563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.086708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.086744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.086881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.086917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.087038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.087095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.087225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.087274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 
00:37:37.662 [2024-11-19 21:27:11.087447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.087498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.087617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.087655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.087765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.087801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.087944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.087980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.662 qpair failed and we were unable to recover it. 00:37:37.662 [2024-11-19 21:27:11.088096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.662 [2024-11-19 21:27:11.088142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.088258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.088308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.088433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.088470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.088586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.088622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.088766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.088801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.088926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.088975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 
00:37:37.663 [2024-11-19 21:27:11.089124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.089161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.089299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.089346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.089454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.089490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.089598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.089634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.089749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.089784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.089942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.089977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.090092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.090138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.090255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.090292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.090402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.090437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.090549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.090585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 
00:37:37.663 [2024-11-19 21:27:11.090689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.090724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.090886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.090926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.091042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.091106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.091253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.091289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.091411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.091449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.091560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.091596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.091737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.091771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.091877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.091912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.092063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.092131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.092258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.092298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 
00:37:37.663 [2024-11-19 21:27:11.092423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.092459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.092565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.663 [2024-11-19 21:27:11.092601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.663 qpair failed and we were unable to recover it. 00:37:37.663 [2024-11-19 21:27:11.092731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.092767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.092913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.092951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.093092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.093138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.093374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.093414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.093560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.093596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.093705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.093741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.093889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.093924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.094056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.094120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 
00:37:37.664 [2024-11-19 21:27:11.094277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.094335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.094447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.094484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.094623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.094658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.094803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.094839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.094977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.095015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.095168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.095218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.095333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.095370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.095471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.095507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.095625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.095662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.095776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.095812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 
00:37:37.664 [2024-11-19 21:27:11.095924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.095961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.096067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.096120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.096235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.096269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.096419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.096455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.096576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.096611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.096714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.096748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.096851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.096887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.097006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.097056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.097228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.097277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.097405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.097443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 
00:37:37.664 [2024-11-19 21:27:11.097583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.097619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.097726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.097762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.097905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.097940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.098084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.098131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.098243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.098279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.098432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.098468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.098576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.098612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.098715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.098751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.098849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.098886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 00:37:37.664 [2024-11-19 21:27:11.099001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.664 [2024-11-19 21:27:11.099036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.664 qpair failed and we were unable to recover it. 
00:37:37.665 [2024-11-19 21:27:11.099165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.099200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.099310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.099350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.099455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.099490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.099600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.099636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.099765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.099802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.099943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.099992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.100222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.100271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.100408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.100444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.100552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.100588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.100694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.100730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 
00:37:37.665 [2024-11-19 21:27:11.100844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.100880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.101014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.101064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.101235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.101273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.101433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.101470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.101580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.101616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.101727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.101763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.101873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.101911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.102037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.102095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.102226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.102270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.102419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.102454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 
00:37:37.665 [2024-11-19 21:27:11.102565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.102600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.102750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.102785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.102920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.102955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.103098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.103151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.103288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.103346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.103463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.103500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.103637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.103672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.103788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.103823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.103978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.104028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.104187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.104223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 
00:37:37.665 [2024-11-19 21:27:11.104347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.104382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.104551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.104585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.104723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.104757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.104896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.665 [2024-11-19 21:27:11.104931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.665 qpair failed and we were unable to recover it. 00:37:37.665 [2024-11-19 21:27:11.105107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.105144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.105265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.105305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.105429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.105466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.105570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.105606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.105737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.105773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.105926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.105975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 
00:37:37.666 [2024-11-19 21:27:11.106123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.106171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.106286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.106322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.106450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.106487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.106597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.106634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.106737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.106772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.106906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.106942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.107060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.107111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.107235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.107284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.107432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.107470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.107575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.107611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 
00:37:37.666 [2024-11-19 21:27:11.107705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.107740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.107862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.107896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.108010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.108046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.108205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.108254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.108387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.108424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.108539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.108574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.108685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.108721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.108829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.108865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.109004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.109046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.109184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.109220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 
00:37:37.666 [2024-11-19 21:27:11.109342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.109377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.109546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.109581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.109701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.109738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.109864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.109904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.110047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.110091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.110256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.110306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.110426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.110473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.110616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.110652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.110759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.110794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.110912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.110947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 
00:37:37.666 [2024-11-19 21:27:11.111082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.111127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.666 [2024-11-19 21:27:11.111269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.666 [2024-11-19 21:27:11.111305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.666 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.111455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.111491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.111599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.111635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.111765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.111800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.111987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.112037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.112183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.112239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.112407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.112456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.112573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.112612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.112786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.112821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 
00:37:37.667 [2024-11-19 21:27:11.112953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.112988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.113095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.113142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.113284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.113335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.113441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.113477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.113611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.113646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.113881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.113916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.114082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.114127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.114232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.114266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.114487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.114524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.114663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.114698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 
00:37:37.667 [2024-11-19 21:27:11.114872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.114909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.115063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.115115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.115231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.115266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.115383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.115417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.115546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.115581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.115727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.115763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.115897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.115931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.116064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.116121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.116257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.116297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.116486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.116536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 
00:37:37.667 [2024-11-19 21:27:11.116688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.116723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.116894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.116929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.117053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.117096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.117312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.117361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.117574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.117609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.117746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.117780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.667 [2024-11-19 21:27:11.117896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.667 [2024-11-19 21:27:11.117931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.667 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.118087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.118136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.118252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.118290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.118455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.118491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 
00:37:37.668 [2024-11-19 21:27:11.118623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.118658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.118790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.118825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.118941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.118976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.119111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.119148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.119310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.119360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.119504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.119542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.119717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.119753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.119890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.119925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.120058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.120118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.120264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.120300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 
00:37:37.668 [2024-11-19 21:27:11.120440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.120475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.120638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.120673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.120842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.120877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.120982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.121017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.121167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.121205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.121337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.121386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.121518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.121567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.121731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.121769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.121931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.121966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.122075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.122111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 
00:37:37.668 [2024-11-19 21:27:11.122273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.122307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.122419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.122454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.122668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.122703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.122842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.122878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.123009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.123043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.123203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.123253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.123367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.123402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.123547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.123582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.668 qpair failed and we were unable to recover it. 00:37:37.668 [2024-11-19 21:27:11.123756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.668 [2024-11-19 21:27:11.123796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.123911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.123946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 
00:37:37.669 [2024-11-19 21:27:11.124189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.124224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.124336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.124371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.124510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.124545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.124688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.124724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.124904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.124939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.125051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.125093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.125223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.125258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.125379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.125428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.125563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.125601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.125712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.125748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 
00:37:37.669 [2024-11-19 21:27:11.125864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.125901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.126074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.126110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.126237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.126287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.126434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.126472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.126580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.126617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.126757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.126793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.126921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.126957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.127084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.127135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.127274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.127308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.127417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.127452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 
00:37:37.669 [2024-11-19 21:27:11.127578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.127613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.127754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.127791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.127904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.127941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.128042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.128090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.128234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.128269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.128383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.128420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.128523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.128560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.128696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.128731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.128850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.128887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.129019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.129054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 
00:37:37.669 [2024-11-19 21:27:11.129165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.129204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.129344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.129379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.129488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.129523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.129657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.129694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.129865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.669 [2024-11-19 21:27:11.129901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.669 qpair failed and we were unable to recover it. 00:37:37.669 [2024-11-19 21:27:11.130040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.130098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.130223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.130269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.130388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.130423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.130556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.130597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.130706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.130742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 
00:37:37.670 [2024-11-19 21:27:11.130851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.130889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.131039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.131081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.131224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.131259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.131365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.131401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.131535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.131570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.131804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.131839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.131977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.132013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.132162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.132213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.132330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.132369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.132505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.132541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 
00:37:37.670 [2024-11-19 21:27:11.132675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.132710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.132874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.132910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.133094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.133131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.133271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.133307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.133463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.133498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.133613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.133648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.133755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.133790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.133903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.133938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.134081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.134118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.134248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.134298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 
00:37:37.670 [2024-11-19 21:27:11.134457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.134506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.134623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.134662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.134823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.134858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.134989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.135024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.135166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.135217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.135399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.135435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.135578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.135613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.135747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.135783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.135929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.135964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.136123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.136173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 
00:37:37.670 [2024-11-19 21:27:11.136315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.670 [2024-11-19 21:27:11.136351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.670 qpair failed and we were unable to recover it. 00:37:37.670 [2024-11-19 21:27:11.136461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.136496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.136658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.136693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.136854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.136890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.136999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.137033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.137206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.137242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.137364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.137403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.137545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.137580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.137719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.137760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.137871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.137905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 
00:37:37.671 [2024-11-19 21:27:11.138063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.138106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.138239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.138275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.138410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.138445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.138550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.138584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.138722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.138757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.138894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.138929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.139092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.139128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.139231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.139265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.139400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.139435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.139593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.139627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 
00:37:37.671 [2024-11-19 21:27:11.139733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.139770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.139927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.139964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.140123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.140173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.140285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.140321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.140484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.140519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.140654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.140690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.140795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.140831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.140985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.141035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.141167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.141204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.141362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.141412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 
00:37:37.671 [2024-11-19 21:27:11.141554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.141590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.141727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.141762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.141868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.141913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.142029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.142064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.142231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.142277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.142405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.142455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.142575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.142613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.142732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.671 [2024-11-19 21:27:11.142768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.671 qpair failed and we were unable to recover it. 00:37:37.671 [2024-11-19 21:27:11.142883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.142919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.143084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.143120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 
00:37:37.672 [2024-11-19 21:27:11.143247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.143296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.143451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.143487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.143590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.143626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.143761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.143796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.143906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.143941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.144089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.144139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.144286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.144323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.144466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.144504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.144669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.144711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.144813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.144849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 
00:37:37.672 [2024-11-19 21:27:11.144985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.145020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.145135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.145171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.145298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.145333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.145469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.145504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.145664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.145699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.145833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.145869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.146001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.146036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.146188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.146224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.146359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.146394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.146523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.146558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 
00:37:37.672 [2024-11-19 21:27:11.146685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.146720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.146844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.146894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.147058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.147117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.147276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.147326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.147444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.147481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.147620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.147657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.147763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.147797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.147937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.147972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.148118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.148159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.148301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.148338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 
00:37:37.672 [2024-11-19 21:27:11.148476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.148512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.148632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.148668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.672 [2024-11-19 21:27:11.148832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.672 [2024-11-19 21:27:11.148868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.672 qpair failed and we were unable to recover it. 00:37:37.673 [2024-11-19 21:27:11.148973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.673 [2024-11-19 21:27:11.149009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.673 qpair failed and we were unable to recover it. 00:37:37.673 [2024-11-19 21:27:11.149152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.673 [2024-11-19 21:27:11.149189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.673 qpair failed and we were unable to recover it. 00:37:37.673 [2024-11-19 21:27:11.149316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.673 [2024-11-19 21:27:11.149366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.673 qpair failed and we were unable to recover it. 00:37:37.673 [2024-11-19 21:27:11.149482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.673 [2024-11-19 21:27:11.149521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.673 qpair failed and we were unable to recover it. 00:37:37.673 [2024-11-19 21:27:11.149684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.673 [2024-11-19 21:27:11.149721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.673 qpair failed and we were unable to recover it. 00:37:37.673 [2024-11-19 21:27:11.149858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.673 [2024-11-19 21:27:11.149894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.673 qpair failed and we were unable to recover it. 00:37:37.673 [2024-11-19 21:27:11.150044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.673 [2024-11-19 21:27:11.150102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.673 qpair failed and we were unable to recover it. 
00:37:37.673 [2024-11-19 21:27:11.150250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.673 [2024-11-19 21:27:11.150288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.673 qpair failed and we were unable to recover it. 00:37:37.673 [2024-11-19 21:27:11.150426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.673 [2024-11-19 21:27:11.150462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.673 qpair failed and we were unable to recover it. 00:37:37.673 [2024-11-19 21:27:11.150568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.673 [2024-11-19 21:27:11.150604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.673 qpair failed and we were unable to recover it. 00:37:37.673 [2024-11-19 21:27:11.150765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.673 [2024-11-19 21:27:11.150801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.673 qpair failed and we were unable to recover it. 00:37:37.673 [2024-11-19 21:27:11.150909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.673 [2024-11-19 21:27:11.150945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.673 qpair failed and we were unable to recover it. 00:37:37.673 [2024-11-19 21:27:11.151130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.673 [2024-11-19 21:27:11.151179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.673 qpair failed and we were unable to recover it. 00:37:37.673 [2024-11-19 21:27:11.151348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.673 [2024-11-19 21:27:11.151385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.673 qpair failed and we were unable to recover it. 00:37:37.673 [2024-11-19 21:27:11.151539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.673 [2024-11-19 21:27:11.151588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.673 qpair failed and we were unable to recover it. 00:37:37.673 [2024-11-19 21:27:11.151735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.673 [2024-11-19 21:27:11.151776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.673 qpair failed and we were unable to recover it. 00:37:37.673 [2024-11-19 21:27:11.151913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.673 [2024-11-19 21:27:11.151950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.673 qpair failed and we were unable to recover it. 
00:37:37.673 [2024-11-19 21:27:11.152088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.673 [2024-11-19 21:27:11.152124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.673 qpair failed and we were unable to recover it.
00:37:37.673 [2024-11-19 21:27:11.152218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.673 [2024-11-19 21:27:11.152253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.673 qpair failed and we were unable to recover it.
[The same three-line failure pattern repeats, one entry per connect() attempt, from 21:27:11.152 through 21:27:11.188: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair 0x61500021ff00, 0x615000210000, 0x6150001f2f00, or 0x6150001ffe80 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it."]
00:37:37.679 [2024-11-19 21:27:11.188660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.679 [2024-11-19 21:27:11.188693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.679 qpair failed and we were unable to recover it.
00:37:37.679 [2024-11-19 21:27:11.188827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.188860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.188965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.189000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.189155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.189202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.189348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.189386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.189502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.189537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.189661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.189697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.189801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.189837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.189972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.190007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.190123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.190158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.190299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.190335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 
00:37:37.679 [2024-11-19 21:27:11.190459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.190508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.190731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.190767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.190945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.190994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.191144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.191181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.191292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.191327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.191468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.191508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.191626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.191661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.191776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.191815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.191945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.191995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.192143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.192194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 
00:37:37.679 [2024-11-19 21:27:11.192316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.192353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.192517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.192553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.192690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.192725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.192833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.192868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.192976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.193012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.193148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.193198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.679 [2024-11-19 21:27:11.193329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.679 [2024-11-19 21:27:11.193378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.679 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.193526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.193575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.193689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.193727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.193881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.193916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 
00:37:37.680 [2024-11-19 21:27:11.194018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.194054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.194204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.194240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.194360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.194400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.194510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.194547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.194687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.194723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.194827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.194863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.194995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.195045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.195199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.195236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.195344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.195379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.195520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.195555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 
00:37:37.680 [2024-11-19 21:27:11.195688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.195723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.195862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.195899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.196122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.196159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.196278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.196314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.196434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.196469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.196599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.196634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.196740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.196775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.196890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.196926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.197052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.197110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.197257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.197294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 
00:37:37.680 [2024-11-19 21:27:11.197408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.197445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.197555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.197592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.197705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.197741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.197860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.197896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.198007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.198044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.198180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.198222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.198321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.198357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.198465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.198500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.198618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.198666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.198780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.198818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 
00:37:37.680 [2024-11-19 21:27:11.198924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.198959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.199118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.199154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.199258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.199293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.199424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.680 [2024-11-19 21:27:11.199459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.680 qpair failed and we were unable to recover it. 00:37:37.680 [2024-11-19 21:27:11.199571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.199606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.199738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.199772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.199883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.199920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.200032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.200076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.200249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.200287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.200405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.200441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 
00:37:37.681 [2024-11-19 21:27:11.200548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.200583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.200696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.200731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.200860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.200895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.201033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.201079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.201193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.201228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.201368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.201404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.201510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.201546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.201659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.201695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.201824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.201860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.202009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.202059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 
00:37:37.681 [2024-11-19 21:27:11.202196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.202234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.202371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.202406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.202528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.202564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.202695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.202730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.202863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.202897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.203036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.203080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.203208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.203255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.203363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.203400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.203521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.203556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.203668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.203714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 
00:37:37.681 [2024-11-19 21:27:11.203849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.203884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.204044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.204086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.204228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.204265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.204430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.204470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.204586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.204634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.204747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.204787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.204915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.204951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.205090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.205125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.205255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.205289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.205399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.205435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 
00:37:37.681 [2024-11-19 21:27:11.205534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.205569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.205704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.681 [2024-11-19 21:27:11.205739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.681 qpair failed and we were unable to recover it. 00:37:37.681 [2024-11-19 21:27:11.205843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.205877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.205987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.206022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.206164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.206199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.206416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.206451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.206580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.206615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.206746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.206781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.206921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.206959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.207121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.207171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 
00:37:37.682 [2024-11-19 21:27:11.207303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.207352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.207522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.207557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.207721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.207756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.207857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.207892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.208003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.208038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.208150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.208185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.208324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.208359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.208460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.208495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.208607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.208641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.208788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.208824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 
00:37:37.682 [2024-11-19 21:27:11.208951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.208986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.209098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.209133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.209256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.209291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.209421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.209456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.209568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.209603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.209736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.209771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.209905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.209940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.210104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.210139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.210273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.210308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 00:37:37.682 [2024-11-19 21:27:11.210445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.682 [2024-11-19 21:27:11.210480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.682 qpair failed and we were unable to recover it. 
00:37:37.682 [2024-11-19 21:27:11.210587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.682 [2024-11-19 21:27:11.210621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.682 qpair failed and we were unable to recover it.
00:37:37.682 [2024-11-19 21:27:11.210733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.682 [2024-11-19 21:27:11.210768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.682 qpair failed and we were unable to recover it.
[The same three-line failure pattern repeats without interruption through 00:37:37.688 (log timestamps 2024-11-19 21:27:11.210871 through 21:27:11.245631): posix.c:1054:posix_sock_create reports "connect() failed, errno = 111", nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error with addr=10.0.0.2, port=4420 against tqpair 0x6150001f2f00, 0x6150001ffe80, 0x615000210000, or 0x61500021ff00, and each attempt ends with "qpair failed and we were unable to recover it."]
00:37:37.688 [2024-11-19 21:27:11.245766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.688 [2024-11-19 21:27:11.245801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.688 qpair failed and we were unable to recover it. 00:37:37.688 [2024-11-19 21:27:11.245933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.688 [2024-11-19 21:27:11.245967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.688 qpair failed and we were unable to recover it. 00:37:37.688 [2024-11-19 21:27:11.246079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.688 [2024-11-19 21:27:11.246125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.688 qpair failed and we were unable to recover it. 00:37:37.688 [2024-11-19 21:27:11.246253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.688 [2024-11-19 21:27:11.246302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.688 qpair failed and we were unable to recover it. 00:37:37.688 [2024-11-19 21:27:11.246456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.688 [2024-11-19 21:27:11.246493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.688 qpair failed and we were unable to recover it. 00:37:37.688 [2024-11-19 21:27:11.246638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.688 [2024-11-19 21:27:11.246674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.688 qpair failed and we were unable to recover it. 00:37:37.688 [2024-11-19 21:27:11.246783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.688 [2024-11-19 21:27:11.246819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.688 qpair failed and we were unable to recover it. 00:37:37.688 [2024-11-19 21:27:11.246973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.688 [2024-11-19 21:27:11.247023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.688 qpair failed and we were unable to recover it. 00:37:37.688 [2024-11-19 21:27:11.247154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.688 [2024-11-19 21:27:11.247191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.688 qpair failed and we were unable to recover it. 00:37:37.688 [2024-11-19 21:27:11.247377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.688 [2024-11-19 21:27:11.247414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.688 qpair failed and we were unable to recover it. 
00:37:37.688 [2024-11-19 21:27:11.247564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.688 [2024-11-19 21:27:11.247612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.688 qpair failed and we were unable to recover it. 00:37:37.688 [2024-11-19 21:27:11.247739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.688 [2024-11-19 21:27:11.247781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.688 qpair failed and we were unable to recover it. 00:37:37.688 [2024-11-19 21:27:11.247904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.688 [2024-11-19 21:27:11.247938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.688 qpair failed and we were unable to recover it. 00:37:37.688 [2024-11-19 21:27:11.248045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.688 [2024-11-19 21:27:11.248093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.688 qpair failed and we were unable to recover it. 00:37:37.688 [2024-11-19 21:27:11.248212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.688 [2024-11-19 21:27:11.248247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.688 qpair failed and we were unable to recover it. 00:37:37.688 [2024-11-19 21:27:11.248405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.688 [2024-11-19 21:27:11.248440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.688 qpair failed and we were unable to recover it. 00:37:37.688 [2024-11-19 21:27:11.248564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.248596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.248729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.248762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.248887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.248937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.249067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.249131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 
00:37:37.689 [2024-11-19 21:27:11.249265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.249314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.249495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.249531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.249669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.249704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.249923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.249958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.250076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.250123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.250233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.250269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.250434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.250469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.250600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.250635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.250762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.250811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.250929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.250966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 
00:37:37.689 [2024-11-19 21:27:11.251113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.251149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.251255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.251290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.251434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.251469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.251606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.251642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.251762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.251798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.251938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.251978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.252115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.252151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.252257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.252292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.252470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.252518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.252638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.252676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 
00:37:37.689 [2024-11-19 21:27:11.252817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.252853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.252989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.253024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.253160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.253195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.253305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.253350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.253459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.253494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.253596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.253631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.253767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.253801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.253940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.253977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.254139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.254189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.254310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.254354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 
00:37:37.689 [2024-11-19 21:27:11.254485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.254521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.254631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.254666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.254819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.254868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.255010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.689 [2024-11-19 21:27:11.255046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.689 qpair failed and we were unable to recover it. 00:37:37.689 [2024-11-19 21:27:11.255208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.255257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.255405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.255443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.255601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.255637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.255740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.255775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.255881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.255925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.256057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.256128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 
00:37:37.690 [2024-11-19 21:27:11.256243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.256279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.256406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.256441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.256551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.256592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.256706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.256739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.256859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.256896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.257002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.257038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.257180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.257229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.257348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.257383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.257522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.257557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.257666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.257702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 
00:37:37.690 [2024-11-19 21:27:11.257853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.257888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.258018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.258053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.258185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.258219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.258361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.258396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.258527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.258561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.258676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.258719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.258835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.258870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.258984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.259019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.259243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.259279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.259411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.259461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 
00:37:37.690 [2024-11-19 21:27:11.259608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.259647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.259766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.259802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.259942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.259978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.260112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.260161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.260275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.260311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.260457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.260492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.260652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.260687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.260790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.260824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.260952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.261001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.261158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.261207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 
00:37:37.690 [2024-11-19 21:27:11.261344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.690 [2024-11-19 21:27:11.261396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.690 qpair failed and we were unable to recover it. 00:37:37.690 [2024-11-19 21:27:11.261539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.261578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.261693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.261728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.261840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.261876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.262032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.262088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.262256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.262305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.262441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.262479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.262591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.262627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.262739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.262774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.262886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.262922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 
00:37:37.691 [2024-11-19 21:27:11.263038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.263081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.263214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.263253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.263391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.263440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.263566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.263603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.263716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.263751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.263899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.263934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.264036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.264077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.264221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.264256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.264407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.264445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.264553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.264589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 
00:37:37.691 [2024-11-19 21:27:11.264730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.264766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.264899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.264934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.265073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.265119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.265235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.265271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.265409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.265445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.265558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.265599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.265711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.265747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.265886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.265922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.266050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.266091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.266209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.266256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 
00:37:37.691 [2024-11-19 21:27:11.266395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.266429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.266566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.266601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.266710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.266746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.266856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.266891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.691 [2024-11-19 21:27:11.267030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.691 [2024-11-19 21:27:11.267064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.691 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.267189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.267223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.267335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.267372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.267478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.267513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.267646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.267681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.267820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.267855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 
00:37:37.692 [2024-11-19 21:27:11.267971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.268006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.268155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.268192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.268304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.268346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.268479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.268514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.268626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.268661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.268785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.268820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.268984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.269019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.269138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.269175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.269281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.269327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.269441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.269477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 
00:37:37.692 [2024-11-19 21:27:11.269584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.269620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.269717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.269752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.269869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.269919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.270058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.270105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.270223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.270265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.270383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.270418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.270545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.270580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.270711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.270746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.270888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.270925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.271096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.271135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 
00:37:37.692 [2024-11-19 21:27:11.271259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.271295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.271401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.271436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.271535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.271569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.271706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.271740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.271856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.271893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.272044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.272086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.272207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.272242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.272355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.272391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.272522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.272557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.272653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.272688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 
00:37:37.692 [2024-11-19 21:27:11.272792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.272829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.273049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.273097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.692 qpair failed and we were unable to recover it. 00:37:37.692 [2024-11-19 21:27:11.273206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.692 [2024-11-19 21:27:11.273241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.273342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.273377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.273498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.273534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.273670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.273705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.273834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.273868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.273972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.274008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.274145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.274195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.274320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.274356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 
00:37:37.693 [2024-11-19 21:27:11.274460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.274495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.274630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.274665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.274766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.274801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.274928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.274963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.275119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.275168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.275293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.275333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.275444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.275480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.275641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.275676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.275788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.275823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.275934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.275970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 
00:37:37.693 [2024-11-19 21:27:11.276090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.276127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.276253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.276302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.276467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.276513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.276629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.276665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.276806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.276841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.276978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.277013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.277132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.277168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.277273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.277309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.277428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.277477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.277623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.277661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 
00:37:37.693 [2024-11-19 21:27:11.277779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.277815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.277952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.277986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.278094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.278131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.278256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.278306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.278457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.278493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.278629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.278664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.278778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.278813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.278951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.278986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.279098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.279134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 00:37:37.693 [2024-11-19 21:27:11.279265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.693 [2024-11-19 21:27:11.279300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.693 qpair failed and we were unable to recover it. 
00:37:37.693 [2024-11-19 21:27:11.279459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.279493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.279618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.279652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.279779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.279816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.279966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.280005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.280123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.280159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.280307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.280343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.280448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.280499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.280612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.280660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.280806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.280842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.280956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.280990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 
00:37:37.694 [2024-11-19 21:27:11.281102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.281137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.281246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.281281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.281430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.281479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.281591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.281627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.281763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.281799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.281909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.281944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.282098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.282147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.282268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.282308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.282449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.282484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.282592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.282629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 
00:37:37.694 [2024-11-19 21:27:11.282766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.282801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.282927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.282976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.283111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.283156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.283278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.283314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.283447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.283483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.283595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.283630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.283730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.283765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.283874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.283911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.284061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.284116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.284237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.284277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 
00:37:37.694 [2024-11-19 21:27:11.284447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.284483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.284592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.284628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.284744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.284779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.284889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.284924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.285058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.285104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.285210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.285246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.285357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.285395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.694 [2024-11-19 21:27:11.285527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.694 [2024-11-19 21:27:11.285563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.694 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.285705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.285754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.285901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.285937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 
00:37:37.695 [2024-11-19 21:27:11.286087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.286136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.286261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.286299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.286408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.286444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.286541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.286576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.286719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.286756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.286922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.286971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.287122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.287159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.287295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.287330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.287436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.287472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.287637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.287672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 
00:37:37.695 [2024-11-19 21:27:11.287818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.287855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.288033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.288090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.288216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.288265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.288408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.288444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.288576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.288611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.288846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.288881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.289015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.289050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.289204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.289241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.289361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.289409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.289549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.289587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 
00:37:37.695 [2024-11-19 21:27:11.289694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.289731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.289866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.289901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.290074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.290120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.290259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.290295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.290403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.290438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.290667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.290702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.290832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.290868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.291002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.291037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.291180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.291217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.291327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.291363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 
00:37:37.695 [2024-11-19 21:27:11.291595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.291631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.291767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.291802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.291966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.292000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.292116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.292153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.292323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.292359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.695 [2024-11-19 21:27:11.292498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.695 [2024-11-19 21:27:11.292533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.695 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.292679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.292714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.292927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.292963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.293084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.293120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.293230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.293266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 
00:37:37.696 [2024-11-19 21:27:11.293431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.293466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.293626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.293661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.293761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.293796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.293916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.293953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.294062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.294103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.294204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.294239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.294375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.294410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.294543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.294578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.294713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.294748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.294870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.294907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 
00:37:37.696 [2024-11-19 21:27:11.295041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.295085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.295218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.295254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.295384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.295420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.295567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.295616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.295761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.295798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.295916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.295953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.296058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.296101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.296216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.296251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.296421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.296455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.296567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.296604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 
00:37:37.696 [2024-11-19 21:27:11.296735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.296770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.296951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.296988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.297099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.297140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.297244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.297279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.297417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.297452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.297564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.297604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.696 [2024-11-19 21:27:11.297766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.696 [2024-11-19 21:27:11.297801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.696 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.297901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.297935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.298067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.298114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.298217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.298252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 
00:37:37.697 [2024-11-19 21:27:11.298417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.298454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.298587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.298622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.298768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.298804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.298936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.298972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.299115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.299152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.299307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.299356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.299503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.299540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.299651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.299687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.299817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.299852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.300015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.300050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 
00:37:37.697 [2024-11-19 21:27:11.300214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.300249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.300364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.300399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.300508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.300543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.300660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.300697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.300839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.300876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.301048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.301093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.301202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.301237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.301341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.301378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.301499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.301548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.301696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.301732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 
00:37:37.697 [2024-11-19 21:27:11.301899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.301935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.302082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.302117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.302226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.302260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.302393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.302428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.302535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.302578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.302688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.302728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.302869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.302906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.303035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.303078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.303215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.303250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.303382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.303430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 
00:37:37.697 [2024-11-19 21:27:11.303578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.303616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.303756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.303793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.303903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.303944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.304084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.304121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.697 qpair failed and we were unable to recover it. 00:37:37.697 [2024-11-19 21:27:11.304233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.697 [2024-11-19 21:27:11.304268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.304407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.304442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.304580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.304615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.304749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.304784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.304924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.304960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.305096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.305145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 
00:37:37.698 [2024-11-19 21:27:11.305265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.305310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.305453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.305489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.305651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.305686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.305825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.305859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.305992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.306028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.306148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.306185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.306315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.306365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.306534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.306571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.306730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.306766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.306876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.306910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 
00:37:37.698 [2024-11-19 21:27:11.307080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.307115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.307228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.307265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.307374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.307411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.307513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.307548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.307656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.307692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.307856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.307891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.308050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.308091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.308227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.308263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.308401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.308438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.308592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.308640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 
00:37:37.698 [2024-11-19 21:27:11.308756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.308794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.308923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.308958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.309094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.309129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.309254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.309304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.309451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.309489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.309651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.309686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.309820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.309854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.310082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.310118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.310296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.310346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.310515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.310551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 
00:37:37.698 [2024-11-19 21:27:11.310692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.310727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.310838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.698 [2024-11-19 21:27:11.310873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.698 qpair failed and we were unable to recover it. 00:37:37.698 [2024-11-19 21:27:11.310981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.311026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.311163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.311213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.311374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.311412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.311519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.311554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.311714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.311749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.311913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.311948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.312085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.312125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.312239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.312275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 
00:37:37.699 [2024-11-19 21:27:11.312419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.312469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.312614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.312652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.312761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.312798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.312943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.312980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.313127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.313164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.313300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.313335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.313444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.313479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.313677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.313714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.313828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.313863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.314027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.314062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 
00:37:37.699 [2024-11-19 21:27:11.314209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.314244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.314371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.314421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.314535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.314574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.314714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.314751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.314909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.314946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.315100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.315151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.315262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.315300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.315452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.315489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.315653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.315700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.315826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.315875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 
00:37:37.699 [2024-11-19 21:27:11.316003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.316052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.316215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.316263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.316404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.316441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.316600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.316635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.316765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.316801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.316923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.316960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.317119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.317169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.317301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.317351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.317516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.317568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 00:37:37.699 [2024-11-19 21:27:11.317713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.699 [2024-11-19 21:27:11.317749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.699 qpair failed and we were unable to recover it. 
00:37:37.700 [2024-11-19 21:27:11.317912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.317948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.318050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.318091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.318229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.318284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.318425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.318463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.318597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.318632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.318762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.318797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.318941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.318976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.319115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.319153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.319294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.319330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.319472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.319508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 
00:37:37.700 [2024-11-19 21:27:11.319615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.319650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.319788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.319823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.319950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.320000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.320133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.320170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.320308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.320344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.320512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.320547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.320667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.320702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.320935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.320970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.321151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.321202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.321317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.321355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 
00:37:37.700 [2024-11-19 21:27:11.321484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.321520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.321621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.321657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.321817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.321851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.322008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.322058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.322188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.322225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.322384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.322433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.322577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.322615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.322731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.322767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.322985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.323021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.323165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.323212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 
00:37:37.700 [2024-11-19 21:27:11.323389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.323424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.323549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.323584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.323713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.323747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.323910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.323945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.324096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.324146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.324256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.324293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.700 [2024-11-19 21:27:11.324433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.700 [2024-11-19 21:27:11.324468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.700 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.324570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.324606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.324758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.324793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.324906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.324943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 
00:37:37.701 [2024-11-19 21:27:11.325079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.325115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.325264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.325301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.325401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.325452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.325589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.325625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.325761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.325796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.325933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.325968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.326104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.326139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.326287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.326337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.326523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.326573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.326715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.326751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 
00:37:37.701 [2024-11-19 21:27:11.326864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.326899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.327036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.327077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.327184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.327219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.327350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.327385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.327524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.327560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.327701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.327736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.327882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.327918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.328107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.328157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.328304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.328342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.328510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.328547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 
00:37:37.701 [2024-11-19 21:27:11.328651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.328687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.328841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.328890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.329065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.329107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.329234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.329284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.329405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.329442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.329551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.329588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.329750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.329785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.329888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.329923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.330064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.330107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 00:37:37.701 [2024-11-19 21:27:11.330251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.701 [2024-11-19 21:27:11.330300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.701 qpair failed and we were unable to recover it. 
00:37:37.707 [2024-11-19 21:27:11.363947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.363982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.364092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.364128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.364254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.364289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.364393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.364428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.364563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.364598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.364738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.364775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.364926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.364976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.365132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.365181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.365328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.365366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.365509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.365549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 
00:37:37.707 [2024-11-19 21:27:11.365656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.365691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.365853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.365889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.366053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.366094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.366226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.366261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.366370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.366405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.366562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.366596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.366755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.366805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.366924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.366960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.367098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.367134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.367245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.367280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 
00:37:37.707 [2024-11-19 21:27:11.367391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.367427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.367535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.367570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.367732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.367768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.367921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.367961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.368077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.368114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.368226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.368263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.368398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.707 [2024-11-19 21:27:11.368433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.707 qpair failed and we were unable to recover it. 00:37:37.707 [2024-11-19 21:27:11.368579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.368614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.368744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.368779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.368939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.368974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 
00:37:37.708 [2024-11-19 21:27:11.369126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.369162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.369296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.369331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.369467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.369502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.369640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.369675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.369823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.369860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.370007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.370080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.370217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.370266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.370387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.370425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.370536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.370572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.370703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.370739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 
00:37:37.708 [2024-11-19 21:27:11.370879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.370916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.371054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.371107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.371246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.371287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.371428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.371464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.371601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.371638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.371777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.371813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.371926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.371962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.372112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.372148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.372284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.372319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.372480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.372520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 
00:37:37.708 [2024-11-19 21:27:11.372653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.372688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.372828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.372862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.373016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.373066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.373229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.373266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.373373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.373408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.373520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.373556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.373658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.373694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.373844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.373895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.374037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.708 [2024-11-19 21:27:11.374081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.708 qpair failed and we were unable to recover it. 00:37:37.708 [2024-11-19 21:27:11.374208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.374257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 
00:37:37.709 [2024-11-19 21:27:11.374406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.374444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.374585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.374620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.374760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.374795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.374941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.374977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.375097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.375132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.375283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.375333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.375481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.375518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.375632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.375668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.375798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.375834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.375968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.376002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 
00:37:37.709 [2024-11-19 21:27:11.376166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.376215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.376361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.376398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.376558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.376594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.376702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.376737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.376898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.376933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.377088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.377137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.377284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.377320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.377456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.377491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.377626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.377660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.377798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.377832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 
00:37:37.709 [2024-11-19 21:27:11.377971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.378006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.378162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.378199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.378329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.378379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.378525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.378562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.378709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.378745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.378865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.378902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.379018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.379077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.379196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.379232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.379394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.379429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.709 [2024-11-19 21:27:11.379563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.379604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 
00:37:37.709 [2024-11-19 21:27:11.379709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.709 [2024-11-19 21:27:11.379746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.709 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.379890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.379927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.380040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.380085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.380218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.380253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.380426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.380462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.380576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.380612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.380718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.380754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.380916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.380953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.381089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.381125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.381263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.381300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 
00:37:37.710 [2024-11-19 21:27:11.381411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.381447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.381582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.381618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.381750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.381785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.381902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.381938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.382093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.382144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.382284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.382320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.382456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.382491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.382626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.382661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.382799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.382834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.382971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.383008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 
00:37:37.710 [2024-11-19 21:27:11.383156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.383193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.383324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.383360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.383498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.383534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.383666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.383701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.710 qpair failed and we were unable to recover it. 00:37:37.710 [2024-11-19 21:27:11.383828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.710 [2024-11-19 21:27:11.383863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.383991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.384028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.384171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.384221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.384382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.384432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.384600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.384637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.384767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.384803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 
00:37:37.711 [2024-11-19 21:27:11.384940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.384975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.385098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.385135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.385271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.385306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.385442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.385477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.385634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.385669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.385848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.385898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.386044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.386095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.386250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.386287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.386391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.386428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.386541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.386584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 
00:37:37.711 [2024-11-19 21:27:11.386721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.386757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.386894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.386931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.387078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.387114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.387239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.387274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.387382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.387417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.387550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.387586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.387727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.387762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.387900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.387937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.388099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.388136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.388258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.388307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 
00:37:37.711 [2024-11-19 21:27:11.388477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.388513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.388646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.388682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.388844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.388878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.388992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.389026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.389206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.389255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.711 [2024-11-19 21:27:11.389382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.711 [2024-11-19 21:27:11.389420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.711 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.389559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.389595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.389706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.389742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.389884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.389921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.390045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.390106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 
00:37:37.712 [2024-11-19 21:27:11.390261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.390297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.390457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.390492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.390627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.390662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.390771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.390805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.390939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.390975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.391143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.391193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.391353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.391402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.391576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.391614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.391753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.391789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.391950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.391985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 
00:37:37.712 [2024-11-19 21:27:11.392144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.392193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.392310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.392347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.392507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.392541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.392673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.392708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.392841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.392876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.392992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.393026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.393165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.393199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.393311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.393345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.393484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.393518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.393680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.393718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 
00:37:37.712 [2024-11-19 21:27:11.393832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.393866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.394042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.394099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.394223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.394261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.394386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.394435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.394602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.712 [2024-11-19 21:27:11.394641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.712 qpair failed and we were unable to recover it. 00:37:37.712 [2024-11-19 21:27:11.394748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.394783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.394961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.394997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.395134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.395170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.395283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.395317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.395459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.395494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 
00:37:37.713 [2024-11-19 21:27:11.395634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.395669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.395781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.395817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.395976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.396011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.396165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.396202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.396343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.396378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.396516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.396550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.396688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.396723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.396885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.396926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.397030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.397065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.397184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.397219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 
00:37:37.713 [2024-11-19 21:27:11.397321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.397356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.397488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.397522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.397672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.397707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.397810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.397846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.397982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.398018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.398139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.398174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.398317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.398352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.398487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.398523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.398661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.398695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.398838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.398873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 
00:37:37.713 [2024-11-19 21:27:11.399012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.399047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.399187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.399222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.399355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.399390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.399550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.399585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.399698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.399733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.399840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.399875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.713 qpair failed and we were unable to recover it. 00:37:37.713 [2024-11-19 21:27:11.399991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.713 [2024-11-19 21:27:11.400026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.400189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.400238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.400383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.400421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.400543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.400585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 
00:37:37.714 [2024-11-19 21:27:11.400692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.400738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.400880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.400915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.401082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.401118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.401238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.401274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.401381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.401416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.401549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.401584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.401712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.401747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.401904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.401955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.402076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.402115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.402232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.402268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 
00:37:37.714 [2024-11-19 21:27:11.402431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.402466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.402598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.402634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.402727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.402761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.402865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.402901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.403007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.403043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.403185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.403220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.403360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.403395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.403555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.403591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.403723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.403758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.403873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.403908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 
00:37:37.714 [2024-11-19 21:27:11.404019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.404054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.404187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.404234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.404339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.404375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.404472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.404506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.404615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.404649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.404788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.404823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.404993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.405029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.405168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.405203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.714 qpair failed and we were unable to recover it. 00:37:37.714 [2024-11-19 21:27:11.405332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.714 [2024-11-19 21:27:11.405367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.405496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.405531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 
00:37:37.715 [2024-11-19 21:27:11.405660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.405694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.405807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.405842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.405981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.406016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.406158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.406193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.406297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.406332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.406493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.406527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.406687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.406721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.406826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.406862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.407025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.407060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.407173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.407212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 
00:37:37.715 [2024-11-19 21:27:11.407338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.407373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.407518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.407553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.407685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.407719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.407858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.407893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.408026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.408061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.408210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.408245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.408390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.408425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.408529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.408564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.408723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.408758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.408858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.408893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 
00:37:37.715 [2024-11-19 21:27:11.408995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.409030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.409147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.409183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.409293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.409328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.409442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.409477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.409607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.409642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.409778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.409813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.409973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.410007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.410150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.715 [2024-11-19 21:27:11.410185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.715 qpair failed and we were unable to recover it. 00:37:37.715 [2024-11-19 21:27:11.410346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.410380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.410494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.410529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 
00:37:37.716 [2024-11-19 21:27:11.410665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.410699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.410819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.410853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.410988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.411022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.411143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.411178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.411289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.411323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.411463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.411497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.411566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:37:37.716 [2024-11-19 21:27:11.411753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.411802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.411963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.412001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.412123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.412160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 
00:37:37.716 [2024-11-19 21:27:11.412296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.412330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.412436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.412472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.412586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.412621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.412780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.412817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.412949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.412983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.413094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.413129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.413242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.413276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.413411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.413445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.413555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.413588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.413718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.413754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 
00:37:37.716 [2024-11-19 21:27:11.413922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.413958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.414108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.414158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.414304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.414339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.716 [2024-11-19 21:27:11.414472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.716 [2024-11-19 21:27:11.414506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.716 qpair failed and we were unable to recover it. 00:37:37.717 [2024-11-19 21:27:11.414631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.717 [2024-11-19 21:27:11.414665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.717 qpair failed and we were unable to recover it. 00:37:37.717 [2024-11-19 21:27:11.414806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.717 [2024-11-19 21:27:11.414840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.717 qpair failed and we were unable to recover it. 00:37:37.717 [2024-11-19 21:27:11.414951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.717 [2024-11-19 21:27:11.414986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.717 qpair failed and we were unable to recover it. 00:37:37.717 [2024-11-19 21:27:11.415126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.717 [2024-11-19 21:27:11.415160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.717 qpair failed and we were unable to recover it. 00:37:37.717 [2024-11-19 21:27:11.415263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.415298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.415434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.415468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 
00:37:38.011 [2024-11-19 21:27:11.415572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.415605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.415709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.415743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.415861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.415910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.416024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.416067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.416190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.416224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.416330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.416364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.416506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.416541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.416687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.416728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.416836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.416872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.416982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.417015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 
00:37:38.011 [2024-11-19 21:27:11.417222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.417256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.417361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.417396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.417532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.417566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.417673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.417707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.417851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.417888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.418019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.418055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.418218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.418255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.418440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.418476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.418587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.418621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.418735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.418771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 
00:37:38.011 [2024-11-19 21:27:11.418914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.418951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.419108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.419157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.419303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.419341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.419449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.419485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.419645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.419680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.419781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.419817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.419936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.419973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.420125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.420175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.011 [2024-11-19 21:27:11.420304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.011 [2024-11-19 21:27:11.420354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.011 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.420503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.420543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 
00:37:38.012 [2024-11-19 21:27:11.420688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.420725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.420837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.420873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.420988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.421024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.421169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.421205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.421312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.421348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.421502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.421537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.421677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.421711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.421843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.421878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.422012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.422048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.422190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.422240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 
00:37:38.012 [2024-11-19 21:27:11.422387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.422428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.422592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.422628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.422735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.422770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.422879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.422920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.423055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.423097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.423225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.423275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.423430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.423471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.423591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.423627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.423758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.423794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.423896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.423931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 
00:37:38.012 [2024-11-19 21:27:11.424062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.424106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.424248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.424284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.424407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.424447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.424588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.424625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.424826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.424864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.425004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.425052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.425227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.425276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.425440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.425477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.425615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.425650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.425809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.425844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 
00:37:38.012 [2024-11-19 21:27:11.425980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.426016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.426172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.426222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.426363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.012 [2024-11-19 21:27:11.426399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.012 qpair failed and we were unable to recover it. 00:37:38.012 [2024-11-19 21:27:11.426506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.426542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.426681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.426717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.426879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.426914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.427017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.427050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.427198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.427233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.427336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.427371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.427507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.427542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 
00:37:38.013 [2024-11-19 21:27:11.427735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.427771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.427915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.427950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.428061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.428104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.428214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.428249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.428363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.428398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.428508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.428542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.428673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.428710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.428818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.428853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.429036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.429096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.429247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.429283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 
00:37:38.013 [2024-11-19 21:27:11.429438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.429488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.429660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.429696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.429857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.429892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.430026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.430066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.430214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.430249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.430383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.430419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.430561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.430597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.430732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.430767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.430897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.430932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.431046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.431087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 
00:37:38.013 [2024-11-19 21:27:11.431238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.431287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.431452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.431516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.431643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.013 [2024-11-19 21:27:11.431681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.013 qpair failed and we were unable to recover it. 00:37:38.013 [2024-11-19 21:27:11.431874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.431910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.432023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.432059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.432233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.432269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.432402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.432437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.432587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.432627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.432769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.432804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.432968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.433004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 
00:37:38.014 [2024-11-19 21:27:11.433130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.433166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.433266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.433299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.433432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.433468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.433594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.433628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.433800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.433835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.433944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.433981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.434148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.434184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.434295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.434328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.434456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.434491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.434627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.434661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 
00:37:38.014 [2024-11-19 21:27:11.434781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.434816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.434951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.434988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.435141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.435191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.435310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.435348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.435510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.435546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.435685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.435721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.435894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.435944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.436058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.436101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.436239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.436275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.436403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.436439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 
00:37:38.014 [2024-11-19 21:27:11.436568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.436603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.436734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.436769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.436878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.014 [2024-11-19 21:27:11.436914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.014 qpair failed and we were unable to recover it. 00:37:38.014 [2024-11-19 21:27:11.437056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.437109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.437247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.437283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.437394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.437428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.437603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.437636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.437769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.437803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.437969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.438006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.438168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.438218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 
00:37:38.015 [2024-11-19 21:27:11.438345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.438382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.438546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.438582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.438706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.438743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.438879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.438914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.439066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.439123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.439282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.439332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.439453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.439490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.439606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.439643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.439748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.439784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.439899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.439936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 
00:37:38.015 [2024-11-19 21:27:11.440054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.440099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.440241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.440280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.440409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.440466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.440640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.440676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.440796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.440832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.440944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.440982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.441146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.441182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.441287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.441321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.441462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.441497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.441635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.441669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 
00:37:38.015 [2024-11-19 21:27:11.441792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.441829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.441994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.442030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.442177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.015 [2024-11-19 21:27:11.442212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.015 qpair failed and we were unable to recover it. 00:37:38.015 [2024-11-19 21:27:11.442314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.442349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.442478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.442513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.442627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.442661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.442803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.442838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.442952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.442987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.443149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.443199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.443310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.443347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 
00:37:38.016 [2024-11-19 21:27:11.443460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.443497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.443610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.443644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.443785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.443822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.443956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.444011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.444151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.444210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.444367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.444405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.444539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.444574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.444715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.444749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.444860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.444896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.445014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.445064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 
00:37:38.016 [2024-11-19 21:27:11.445200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.445240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.445354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.445391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.445508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.445544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.445656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.445692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.445823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.445859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.445976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.446012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.446167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.446207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.446333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.016 [2024-11-19 21:27:11.446370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.016 qpair failed and we were unable to recover it. 00:37:38.016 [2024-11-19 21:27:11.446503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.446550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.446688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.446723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 
00:37:38.017 [2024-11-19 21:27:11.446845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.446882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.447024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.447061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.447175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.447209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.447320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.447356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.447455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.447490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.447621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.447656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.447792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.447827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.447945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.447982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.448097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.448133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.448237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.448272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 
00:37:38.017 [2024-11-19 21:27:11.448415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.448452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.448585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.448619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.448733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.448769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.448925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.448961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.449099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.449134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.449245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.449280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.449446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.449481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.449612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.449647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.449749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.449786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.449897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.449933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 
00:37:38.017 [2024-11-19 21:27:11.450090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.450140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.450258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.450294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.450434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.450469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.450606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.450654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.450793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.450828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.450938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.450975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.451105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.451141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.451286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.451321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.451434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.451469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.451610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.451645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 
00:37:38.017 [2024-11-19 21:27:11.451786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.017 [2024-11-19 21:27:11.451821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.017 qpair failed and we were unable to recover it. 00:37:38.017 [2024-11-19 21:27:11.451955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.451992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.452178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.452229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.452359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.452409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.452527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.452563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.452723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.452758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.452895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.452931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.453045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.453088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.453202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.453236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.453392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.453432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 
00:37:38.018 [2024-11-19 21:27:11.453542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.453578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.453716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.453752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.453857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.453892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.454005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.454044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.454169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.454205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.454346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.454382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.454496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.454532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.454671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.454706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.454822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.454864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.454998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.455034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 
00:37:38.018 [2024-11-19 21:27:11.455189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.455227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.455362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.455398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.455537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.455573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.455685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.455721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.455870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.455919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.456038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.456100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.456236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.456273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.456411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.456447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.456612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.456647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.456789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.456825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 
00:37:38.018 [2024-11-19 21:27:11.456936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.456973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.457122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.457172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.457294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.457333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.457468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.457509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.018 [2024-11-19 21:27:11.457627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.018 [2024-11-19 21:27:11.457662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.018 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.457772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.457807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.457945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.457981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.458122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.458159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.458286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.458336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.458481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.458518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 
00:37:38.019 [2024-11-19 21:27:11.458632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.458667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.458811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.458846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.459007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.459057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.459212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.459250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.459364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.459401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.459511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.459547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.459669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.459705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.459857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.459894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.460035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.460081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.460219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.460255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 
00:37:38.019 [2024-11-19 21:27:11.460363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.460399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.460537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.460572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.460677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.460712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.460836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.460872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.461018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.461067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.461199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.461235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.461341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.461377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.461487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.461522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.461659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.461695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.461837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.461873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 
00:37:38.019 [2024-11-19 21:27:11.462006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.462044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.462182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.462232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.462356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.462392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.462509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.462555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.462664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.462699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.462805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.462840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.462974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.019 [2024-11-19 21:27:11.463009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.019 qpair failed and we were unable to recover it. 00:37:38.019 [2024-11-19 21:27:11.463151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.463201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.463321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.463360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.463470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.463507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 
00:37:38.020 [2024-11-19 21:27:11.463616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.463652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.463785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.463821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.463951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.464001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.464122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.464165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.464287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.464322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.464460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.464495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.464598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.464633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.464734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.464769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.464949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.464999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.465132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.465182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 
00:37:38.020 [2024-11-19 21:27:11.465304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.465354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.465497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.465535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.465667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.465702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.465836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.465871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.466008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.466044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.466178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.466228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.466348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.466387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.466496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.466533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.466664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.466699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.466832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.466868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 
00:37:38.020 [2024-11-19 21:27:11.466984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.467020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.467141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.467178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.467315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.467350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.467486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.467520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.467626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.467661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.467777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.467817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.467928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.467964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.468130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.468180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.468307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.468345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.020 [2024-11-19 21:27:11.468480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.468516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 
00:37:38.020 [2024-11-19 21:27:11.468630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.020 [2024-11-19 21:27:11.468667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.020 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.468789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.468826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.468935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.468970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.469080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.469115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.469226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.469260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.469404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.469454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.469566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.469604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.469716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.469753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.469913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.469948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.470102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.470152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 
00:37:38.021 [2024-11-19 21:27:11.470271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.470308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.470420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.470455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.470590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.470626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.470727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.470769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.470904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.470939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.471089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.471127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.471276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.471326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.471440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.471477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.471611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.471648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.471776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.471810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 
00:37:38.021 [2024-11-19 21:27:11.471944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.471979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.472103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.472140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.472253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.472300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.472413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.472448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.472575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.472611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.472739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.472774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.472886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.472922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.473048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.473091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.473206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.473241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 00:37:38.021 [2024-11-19 21:27:11.473370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.021 [2024-11-19 21:27:11.473419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.021 qpair failed and we were unable to recover it. 
00:37:38.022 [2024-11-19 21:27:11.473565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.473601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.473751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.473788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.473909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.473945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.474049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.474094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.474201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.474234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.474401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.474436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.474559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.474598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.474707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.474744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.474884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.474921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.475058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.475103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 
00:37:38.022 [2024-11-19 21:27:11.475226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.475262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.475394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.475429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.475566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.475601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.475718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.475753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.475892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.475929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.476096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.476133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.476241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.476277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.476394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.476429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.476542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.476577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.476691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.476727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 
00:37:38.022 [2024-11-19 21:27:11.476865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.476900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.477048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.477095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.477232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.477267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.477401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.477441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.477551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.477587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.477728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.477764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.477909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.477946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.478065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.478107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.478212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.478247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 00:37:38.022 [2024-11-19 21:27:11.478349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.022 [2024-11-19 21:27:11.478384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.022 qpair failed and we were unable to recover it. 
00:37:38.022 [2024-11-19 21:27:11.478488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.478523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.478631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.478667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.478803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.478839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.478951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.478987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.479099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.479135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.479270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.479305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.479417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.479454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.479574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.479610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.479744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.479780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.479948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.479983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 
00:37:38.023 [2024-11-19 21:27:11.480124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.480160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.480272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.480308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.480431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.480480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.480627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.480663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.480799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.480834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.480974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.481011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.481147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.481197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.481311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.481348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.481465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.481502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.481609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.481644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 
00:37:38.023 [2024-11-19 21:27:11.481756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.481795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.481930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.481964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.482079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.482114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.482256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.482291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.482396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.482431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.482567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.482602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.482731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.482765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.482893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.482943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.483077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.483126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.483250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.483287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 
00:37:38.023 [2024-11-19 21:27:11.483398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.483433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.483561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.483596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.023 qpair failed and we were unable to recover it. 00:37:38.023 [2024-11-19 21:27:11.483710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.023 [2024-11-19 21:27:11.483744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.483878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.483919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.484033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.484081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.484203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.484241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.484376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.484412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.484519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.484554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.484682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.484731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.484874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.484911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 
00:37:38.024 [2024-11-19 21:27:11.485021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.485056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.485186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.485221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.485345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.485381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.485491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.485538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.485671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.485706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.485869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.485904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.486032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.486090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.486258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.486295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.486405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.486441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.486549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.486585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 
00:37:38.024 [2024-11-19 21:27:11.486691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.486726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.486856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.486892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.487006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.487041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.487170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.487220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.487385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.487434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.487580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.487617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.487777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.487813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.487954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.487989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.488106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.488142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.488255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.488291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 
00:37:38.024 [2024-11-19 21:27:11.488441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.488490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.488635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.488672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.488811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.488848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.488999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.489034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.024 [2024-11-19 21:27:11.489158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.024 [2024-11-19 21:27:11.489196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.024 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.489300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.489336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.489485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.489522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.489652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.489688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.489827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.489861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.489967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.490002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 
00:37:38.025 [2024-11-19 21:27:11.490143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.490193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.490320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.490370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.490494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.490530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.490637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.490672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.490785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.490821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.490929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.490965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.491103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.491139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.491251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.491286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.491400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.491435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.491578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.491613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 
00:37:38.025 [2024-11-19 21:27:11.491724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.491758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.491893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.491928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.492037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.492078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.492197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.492233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.492374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.492424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.492560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.492596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.492703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.492738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.492856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.492891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.492993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.493028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.493150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.493185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 
00:37:38.025 [2024-11-19 21:27:11.493319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.493355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.493488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.493522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.493628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.493664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.493783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.493819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.493955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.493991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.494093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.025 [2024-11-19 21:27:11.494127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.025 qpair failed and we were unable to recover it. 00:37:38.025 [2024-11-19 21:27:11.494236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.494271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.494402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.494452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.494567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.494603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.494751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.494785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 
00:37:38.026 [2024-11-19 21:27:11.494897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.494937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.495097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.495131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.495245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.495279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.495390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.495427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.495537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.495579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.495691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.495728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.495840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.495875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.496027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.496094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.496221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.496259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.496366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.496402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 
00:37:38.026 [2024-11-19 21:27:11.496520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.496555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.496673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.496711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.496850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.496885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.497019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.497054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.497186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.497222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.497335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.497369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.497500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.497535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.497674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.497709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.497845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.497880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.497990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.498025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 
00:37:38.026 [2024-11-19 21:27:11.498188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.498238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.498357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.498396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.498504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.498541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.498674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.498709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.498825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.498862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.498997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.499032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.499151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.499186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.499327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.499362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.499474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.026 [2024-11-19 21:27:11.499508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.026 qpair failed and we were unable to recover it. 00:37:38.026 [2024-11-19 21:27:11.499624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.027 [2024-11-19 21:27:11.499659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.027 qpair failed and we were unable to recover it. 
00:37:38.027 [2024-11-19 21:27:11.499801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.027 [2024-11-19 21:27:11.499840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.027 qpair failed and we were unable to recover it. 00:37:38.027 [2024-11-19 21:27:11.499977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.027 [2024-11-19 21:27:11.500012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.027 qpair failed and we were unable to recover it. 00:37:38.027 [2024-11-19 21:27:11.500153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.027 [2024-11-19 21:27:11.500204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.027 qpair failed and we were unable to recover it. 00:37:38.027 [2024-11-19 21:27:11.500322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.027 [2024-11-19 21:27:11.500357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.027 qpair failed and we were unable to recover it. 00:37:38.027 [2024-11-19 21:27:11.500475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.027 [2024-11-19 21:27:11.500511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.027 qpair failed and we were unable to recover it. 00:37:38.027 [2024-11-19 21:27:11.500612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.027 [2024-11-19 21:27:11.500647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.027 qpair failed and we were unable to recover it. 00:37:38.027 [2024-11-19 21:27:11.500782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.027 [2024-11-19 21:27:11.500816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.027 qpair failed and we were unable to recover it. 00:37:38.027 [2024-11-19 21:27:11.500953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.027 [2024-11-19 21:27:11.500988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.027 qpair failed and we were unable to recover it. 00:37:38.027 [2024-11-19 21:27:11.501151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.027 [2024-11-19 21:27:11.501188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.027 qpair failed and we were unable to recover it. 00:37:38.027 [2024-11-19 21:27:11.501331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.027 [2024-11-19 21:27:11.501370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.027 qpair failed and we were unable to recover it. 
00:37:38.027 [2024-11-19 21:27:11.501480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.027 [2024-11-19 21:27:11.501523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:38.027 qpair failed and we were unable to recover it.
00:37:38.027 [2024-11-19 21:27:11.502155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.027 [2024-11-19 21:27:11.502206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:38.027 qpair failed and we were unable to recover it.
00:37:38.027 [2024-11-19 21:27:11.502781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.027 [2024-11-19 21:27:11.502818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:38.027 qpair failed and we were unable to recover it.
00:37:38.027 [2024-11-19 21:27:11.502980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.027 [2024-11-19 21:27:11.503029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:38.027 qpair failed and we were unable to recover it.
(the same three-line failure pattern, "connect() failed, errno = 111", "sock connection error of tqpair=... with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it.", repeats continuously in the log from 21:27:11.501 through 21:27:11.536 for tqpairs 0x61500021ff00, 0x6150001f2f00, 0x615000210000, and 0x6150001ffe80)
00:37:38.034 [2024-11-19 21:27:11.536344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.536382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.536527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.536563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.536684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.536719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.536860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.536896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.537009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.537044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.537169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.537219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.537340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.537377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.537486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.537522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.537660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.537696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.537822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.537858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 
00:37:38.034 [2024-11-19 21:27:11.537970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.538020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.538172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.538208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.538321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.538356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.538463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.538500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.538635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.538670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.538776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.538811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.538934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.538984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.539146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.539196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.539316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.539354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.539495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.539530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 
00:37:38.034 [2024-11-19 21:27:11.539699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.539735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.539851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.034 [2024-11-19 21:27:11.539888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.034 qpair failed and we were unable to recover it. 00:37:38.034 [2024-11-19 21:27:11.539998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.540036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.540186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.540226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.540339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.540375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.540512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.540547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.540680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.540716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.540852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.540922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.541063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.541112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.541262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.541298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 
00:37:38.035 [2024-11-19 21:27:11.541436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.541476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.541611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.541646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.541756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.541791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.541929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.541966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.542130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.542179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.542302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.542351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.542456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.542494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.542614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.542649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.542757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.542792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.542955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.542991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 
00:37:38.035 [2024-11-19 21:27:11.543130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.543180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.543314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.543363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.543495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.543531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.543644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.543680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.543820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.543855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.543961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.543996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.544137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.544175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.544274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.544310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.544417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.544453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.544607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.544642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 
00:37:38.035 [2024-11-19 21:27:11.544764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.544814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.544958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.545008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.035 qpair failed and we were unable to recover it. 00:37:38.035 [2024-11-19 21:27:11.545141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.035 [2024-11-19 21:27:11.545178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.545292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.545328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.545463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.545498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.545628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.545663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.545766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.545803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.545957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.546007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.546158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.546220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.546341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.546379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 
00:37:38.036 [2024-11-19 21:27:11.546514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.546550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.546650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.546685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.546826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.546862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.547023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.547082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.547213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.547262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.547406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.547444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.547583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.547618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.547738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.547774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.547884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.547919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.548045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.548087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 
00:37:38.036 [2024-11-19 21:27:11.548205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.548245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.548350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.548385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.548525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.548560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.548669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.548704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.548840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.548878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.549008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.549058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.549196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.549234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.549370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.549406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.549569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.549603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.549712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.549747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 
00:37:38.036 [2024-11-19 21:27:11.549861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.549897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.550028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.550085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.550205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.550243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.550359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.550395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.550537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.550572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.036 [2024-11-19 21:27:11.550703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.036 [2024-11-19 21:27:11.550739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.036 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.550860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.550895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.551045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.551102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.551240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.551290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.551435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.551472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 
00:37:38.037 [2024-11-19 21:27:11.551580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.551616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.551744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.551779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.551886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.551921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.552054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.552097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.552219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.552269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.552395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.552434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.552539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.552575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.552715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.552751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.552861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.552895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.553018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.553054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 
00:37:38.037 [2024-11-19 21:27:11.553179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.553217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.553354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.553393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.553528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.553564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.553663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.553698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.553839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.553874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.554006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.554041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.554187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.554236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.554360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.554397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.554508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.554543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.554650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.554685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 
00:37:38.037 [2024-11-19 21:27:11.554808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.554850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.555011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.555061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.555182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.555219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.555329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.555363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.555520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.555555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.555665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.555699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.555815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.555848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.556000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.037 [2024-11-19 21:27:11.556049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.037 qpair failed and we were unable to recover it. 00:37:38.037 [2024-11-19 21:27:11.556225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.038 [2024-11-19 21:27:11.556274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.038 qpair failed and we were unable to recover it. 00:37:38.038 [2024-11-19 21:27:11.556430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.038 [2024-11-19 21:27:11.556467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.038 qpair failed and we were unable to recover it. 
00:37:38.038 [2024-11-19 21:27:11.556601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.038 [2024-11-19 21:27:11.556636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.038 qpair failed and we were unable to recover it. 00:37:38.038 [2024-11-19 21:27:11.556743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.038 [2024-11-19 21:27:11.556779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.038 qpair failed and we were unable to recover it. 00:37:38.038 [2024-11-19 21:27:11.556896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.038 [2024-11-19 21:27:11.556932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.038 qpair failed and we were unable to recover it. 00:37:38.038 [2024-11-19 21:27:11.557084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.038 [2024-11-19 21:27:11.557134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.038 qpair failed and we were unable to recover it. 00:37:38.038 [2024-11-19 21:27:11.557276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.038 [2024-11-19 21:27:11.557326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.038 qpair failed and we were unable to recover it. 00:37:38.038 [2024-11-19 21:27:11.557466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.038 [2024-11-19 21:27:11.557504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.038 qpair failed and we were unable to recover it. 00:37:38.038 [2024-11-19 21:27:11.557606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.038 [2024-11-19 21:27:11.557642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.038 qpair failed and we were unable to recover it. 00:37:38.038 [2024-11-19 21:27:11.557798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.038 [2024-11-19 21:27:11.557833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.038 qpair failed and we were unable to recover it. 00:37:38.038 [2024-11-19 21:27:11.557942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.038 [2024-11-19 21:27:11.557977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.038 qpair failed and we were unable to recover it. 00:37:38.038 [2024-11-19 21:27:11.558132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.038 [2024-11-19 21:27:11.558182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.038 qpair failed and we were unable to recover it. 
00:37:38.038 [2024-11-19 21:27:11.558296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.038 [2024-11-19 21:27:11.558335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:38.038 qpair failed and we were unable to recover it.
00:37:38.038 [2024-11-19 21:27:11.559159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.038 [2024-11-19 21:27:11.559196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:38.038 qpair failed and we were unable to recover it.
00:37:38.038 [2024-11-19 21:27:11.559959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.038 [2024-11-19 21:27:11.560010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:38.038 qpair failed and we were unable to recover it.
00:37:38.039 [2024-11-19 21:27:11.561314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.039 [2024-11-19 21:27:11.561363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:38.039 qpair failed and we were unable to recover it.
00:37:38.045 [2024-11-19 21:27:11.593095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.593144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 00:37:38.045 [2024-11-19 21:27:11.593306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.593355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 00:37:38.045 [2024-11-19 21:27:11.593524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.593572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 00:37:38.045 [2024-11-19 21:27:11.593714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.593752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 00:37:38.045 [2024-11-19 21:27:11.593891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.593926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 00:37:38.045 [2024-11-19 21:27:11.594054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.594114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 00:37:38.045 [2024-11-19 21:27:11.594243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.594279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 00:37:38.045 [2024-11-19 21:27:11.594458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.594508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 00:37:38.045 [2024-11-19 21:27:11.594620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.594657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 00:37:38.045 [2024-11-19 21:27:11.594775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.594810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 
00:37:38.045 [2024-11-19 21:27:11.594926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.594961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 00:37:38.045 [2024-11-19 21:27:11.595098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.595132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 00:37:38.045 [2024-11-19 21:27:11.595271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.595306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 00:37:38.045 [2024-11-19 21:27:11.595450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.595484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 00:37:38.045 [2024-11-19 21:27:11.595619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.595653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 00:37:38.045 [2024-11-19 21:27:11.595785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.595819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 00:37:38.045 [2024-11-19 21:27:11.595923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.595965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 00:37:38.045 [2024-11-19 21:27:11.596091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.596129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 00:37:38.045 [2024-11-19 21:27:11.596262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.596295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 00:37:38.045 [2024-11-19 21:27:11.596411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.045 [2024-11-19 21:27:11.596446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.045 qpair failed and we were unable to recover it. 
00:37:38.046 [2024-11-19 21:27:11.596546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.596580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.596694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.596728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.596883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.596933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.597059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.597108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.597253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.597302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.597426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.597475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.597616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.597652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.597789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.597825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.597937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.597973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.598113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.598147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 
00:37:38.046 [2024-11-19 21:27:11.598283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.598317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.598433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.598468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.598575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.598610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.598726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.598760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.598894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.598927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.599028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.599061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.599205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.599239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.599346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.599385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.599501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.599535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.599687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.599735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 
00:37:38.046 [2024-11-19 21:27:11.599857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.599894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.600036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.600077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.600184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.600220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.600362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.600397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.600505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.600540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.600643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.600679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.600822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.600861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.600996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.601031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.601150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.601186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.601291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.601327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 
00:37:38.046 [2024-11-19 21:27:11.601504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.601540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.601655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.046 [2024-11-19 21:27:11.601691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.046 qpair failed and we were unable to recover it. 00:37:38.046 [2024-11-19 21:27:11.601798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.601834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.601969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.602002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.602105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.602140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.602247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.602280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.602427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.602461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.602559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.602594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.602707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.602743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.602877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.602913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 
00:37:38.047 [2024-11-19 21:27:11.603051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.603096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.603235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.603270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.603379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.603413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.603548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.603583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.603698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.603733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.603908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.603957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.604085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.604136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.604263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.604300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.604405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.604441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.604578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.604613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 
00:37:38.047 [2024-11-19 21:27:11.604741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.604776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.604895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.604931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.605093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.605133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.605243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.605279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.605417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.605451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.605551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.605586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.605693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.605727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.605857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.605898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.606010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.606045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.606160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.606196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 
00:37:38.047 [2024-11-19 21:27:11.606308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.606343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.606493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.606543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.047 qpair failed and we were unable to recover it. 00:37:38.047 [2024-11-19 21:27:11.606662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.047 [2024-11-19 21:27:11.606700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.606861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.606895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.607012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.607046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.607156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.607190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.607300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.607333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.607438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.607472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.607619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.607653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.607783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.607817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 
00:37:38.048 [2024-11-19 21:27:11.607919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.607955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.608082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.608119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.608236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.608273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.608377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.608412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.608555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.608589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.608694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.608729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.608836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.608871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.609009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.609043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.609156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.609191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.609297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.609332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 
00:37:38.048 [2024-11-19 21:27:11.609431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.609465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.609576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.609610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.609776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.609812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.609959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.610008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.610138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.610175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.610291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.610328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.610462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.610497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.610628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.610663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.610765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.610801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.610940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.610977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 
00:37:38.048 [2024-11-19 21:27:11.611128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.611167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.611282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.611319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.611455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.611491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.611630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.611666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.611770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.611806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.611971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.048 [2024-11-19 21:27:11.612021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.048 qpair failed and we were unable to recover it. 00:37:38.048 [2024-11-19 21:27:11.612147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.049 [2024-11-19 21:27:11.612184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.049 qpair failed and we were unable to recover it. 00:37:38.049 [2024-11-19 21:27:11.612302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.049 [2024-11-19 21:27:11.612343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.049 qpair failed and we were unable to recover it. 00:37:38.049 [2024-11-19 21:27:11.612445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.049 [2024-11-19 21:27:11.612480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.049 qpair failed and we were unable to recover it. 00:37:38.049 [2024-11-19 21:27:11.612618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.049 [2024-11-19 21:27:11.612653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.049 qpair failed and we were unable to recover it. 
00:37:38.049 [2024-11-19 21:27:11.612757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.049 [2024-11-19 21:27:11.612794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.049 qpair failed and we were unable to recover it. 00:37:38.049 [2024-11-19 21:27:11.612906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.049 [2024-11-19 21:27:11.612942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.049 qpair failed and we were unable to recover it. 00:37:38.049 [2024-11-19 21:27:11.613082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.049 [2024-11-19 21:27:11.613131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.049 qpair failed and we were unable to recover it. 00:37:38.049 [2024-11-19 21:27:11.613249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.049 [2024-11-19 21:27:11.613284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.049 qpair failed and we were unable to recover it. 00:37:38.049 [2024-11-19 21:27:11.613428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.049 [2024-11-19 21:27:11.613463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.049 qpair failed and we were unable to recover it. 00:37:38.049 [2024-11-19 21:27:11.613598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.049 [2024-11-19 21:27:11.613633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.049 qpair failed and we were unable to recover it. 00:37:38.049 [2024-11-19 21:27:11.613771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.049 [2024-11-19 21:27:11.613805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.049 qpair failed and we were unable to recover it. 00:37:38.049 [2024-11-19 21:27:11.613945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.049 [2024-11-19 21:27:11.613980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.049 qpair failed and we were unable to recover it. 00:37:38.049 [2024-11-19 21:27:11.614093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.049 [2024-11-19 21:27:11.614129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.049 qpair failed and we were unable to recover it. 00:37:38.049 [2024-11-19 21:27:11.614263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.049 [2024-11-19 21:27:11.614298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.049 qpair failed and we were unable to recover it. 
00:37:38.049 [2024-11-19 21:27:11.614435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.049 [2024-11-19 21:27:11.614470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.049 qpair failed and we were unable to recover it. 00:37:38.049 [2024-11-19 21:27:11.614602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.049 [2024-11-19 21:27:11.614637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.049 qpair failed and we were unable to recover it. 00:37:38.049 [2024-11-19 21:27:11.614762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.614811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.614945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.614995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.615118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.615155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.615256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.615291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.615428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.615462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.615625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.615660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.615772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.615809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.615923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.615958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 
00:37:38.050 [2024-11-19 21:27:11.616060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.616109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.616220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.616255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.616363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.616398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.616536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.616571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.616688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.616722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.616848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.616883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.617032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.617090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.617212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.617248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.617357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.617392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.617506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.617541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 
00:37:38.050 [2024-11-19 21:27:11.617704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.617738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.617890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.617939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.618084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.618120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.618227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.618263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.618399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.618434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.618534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.618568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.618673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.618707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.618805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.618845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.618976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.619025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.619187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.619228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 
00:37:38.050 [2024-11-19 21:27:11.619332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.619369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.619502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.619537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.619654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.619690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.619795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.619831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.619938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.619973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.050 qpair failed and we were unable to recover it. 00:37:38.050 [2024-11-19 21:27:11.620095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.050 [2024-11-19 21:27:11.620144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.620264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.620301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.620410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.620445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.620574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.620610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.620716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.620752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 
00:37:38.051 [2024-11-19 21:27:11.620860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.620896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.621016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.621053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.621187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.621236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.621391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.621439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.621605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.621641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.621805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.621841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.621941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.621976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.622114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.622149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.622292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.622332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.622442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.622477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 
00:37:38.051 [2024-11-19 21:27:11.622589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.622626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.622734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.622770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.622890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.622925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.623040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.623081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.623211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.623260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.623400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.623450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.623558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.623596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.623729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.623764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.623923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.623959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.624082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.624127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 
00:37:38.051 [2024-11-19 21:27:11.624266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.624312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.624426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.624465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.624580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.624618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.624768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.624803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.624940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.624976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.625090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.625134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.051 [2024-11-19 21:27:11.625276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.051 [2024-11-19 21:27:11.625311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.051 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.625434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.625474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.625639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.625674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.625817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.625851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 
00:37:38.052 [2024-11-19 21:27:11.625981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.626031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.626199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.626249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.626416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.626465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.626612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.626650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.626759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.626795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.626899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.626935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.627065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.627121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.627245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.627295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.627419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.627457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.627584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.627633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 
00:37:38.052 [2024-11-19 21:27:11.627739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.627774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.627892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.627927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.628102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.628140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.628254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.628290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.628414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.628451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.628590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.628626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.628744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.628783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.628947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.628983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.629089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.629130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.629233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.629268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 
00:37:38.052 [2024-11-19 21:27:11.629409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.629459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.629582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.629620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.629776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.629812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.629938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.629985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.630114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.630164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.630293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.630337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.630471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.052 [2024-11-19 21:27:11.630507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.052 qpair failed and we were unable to recover it. 00:37:38.052 [2024-11-19 21:27:11.630642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.630677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.630815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.630851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.630997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.631033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 
00:37:38.053 [2024-11-19 21:27:11.631156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.631191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.631325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.631373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.631508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.631545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.631708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.631744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.631874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.631909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.632048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.632093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.632214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.632250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.632368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.632411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.632521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.632556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.632661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.632696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 
00:37:38.053 [2024-11-19 21:27:11.632806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.632841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.632974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.633023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.633167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.633205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.633388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.633424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.633538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.633574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.633715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.633750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.633866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.633904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.634042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.634092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.634213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.634249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.634368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.634403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 
00:37:38.053 [2024-11-19 21:27:11.634565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.634600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.634735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.634785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.634931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.634967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.635083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.635128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.635236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.635271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.635393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.635428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.635565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.635600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.635705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.635740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.635899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.635934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.053 qpair failed and we were unable to recover it. 00:37:38.053 [2024-11-19 21:27:11.636083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.053 [2024-11-19 21:27:11.636138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 
00:37:38.054 [2024-11-19 21:27:11.636258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.636297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.636445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.636481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.636600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.636635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.636746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.636782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.636912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.636960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.637087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.637128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.637232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.637267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.637405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.637440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.637583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.637618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.637721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.637756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 
00:37:38.054 [2024-11-19 21:27:11.637898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.637935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.638041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.638091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.638230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.638266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.638439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.638474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.638581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.638617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.638823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.638858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.638962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.638998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.639113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.639155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.639320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.639369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.639490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.639540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 
00:37:38.054 [2024-11-19 21:27:11.639705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.639741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.639846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.639881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.640018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.640053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.640197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.640232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.640346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.640381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.640492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.640527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.640664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.640699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.054 [2024-11-19 21:27:11.640807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.054 [2024-11-19 21:27:11.640842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.054 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.640955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.640991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.641160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.641194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 
00:37:38.055 [2024-11-19 21:27:11.641312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.641351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.641471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.641506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.641696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.641731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.641862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.641897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.642083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.642118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.642222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.642257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.642415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.642450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.642582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.642616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.642785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.642820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.642948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.642982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 
00:37:38.055 [2024-11-19 21:27:11.643095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.643131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.643266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.643301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.643405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.643440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.643630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.643680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.643798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.643836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.643966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.644002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.644118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.644154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.644277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.644327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.644444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.644481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.644614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.644651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 
00:37:38.055 [2024-11-19 21:27:11.644757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.644793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.644931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.644966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.645079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.645115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.645229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.645265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.645369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.645404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.055 qpair failed and we were unable to recover it. 00:37:38.055 [2024-11-19 21:27:11.645509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.055 [2024-11-19 21:27:11.645545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.645667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.645702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.645876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.645921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.646032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.646077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.646202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.646237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 
00:37:38.056 [2024-11-19 21:27:11.646372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.646407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.646537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.646573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.646740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.646776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.646943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.646979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.647159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.647209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.647326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.647362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.647498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.647533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.647641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.647676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.647783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.647825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.647944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.647981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 
00:37:38.056 [2024-11-19 21:27:11.648159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.648208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.648375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.648424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.648597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.648647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.648760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.648797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.648962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.648997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.649124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.649160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.649270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.649318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.649487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.649537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.649697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.649735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.649918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.649955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 
00:37:38.056 [2024-11-19 21:27:11.650085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.650130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.650290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.650328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.650489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.650525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.650648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.650686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.650830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.650868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.650986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.056 [2024-11-19 21:27:11.651020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.056 qpair failed and we were unable to recover it. 00:37:38.056 [2024-11-19 21:27:11.651199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.651234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.651346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.651390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.651524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.651558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.651727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.651761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 
00:37:38.057 [2024-11-19 21:27:11.651897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.651931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.652057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.652100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.652212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.652247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.652369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.652418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.652591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.652627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.652744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.652779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.652942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.652977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.653109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.653165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.653293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.653342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.653522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.653559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 
00:37:38.057 [2024-11-19 21:27:11.653719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.653755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.653894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.653929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.654043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.654085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.654252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.654288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.654422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.654457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.654596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.654631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.654858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.654893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.655023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.655058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.655197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.655232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.655386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.655435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 
00:37:38.057 [2024-11-19 21:27:11.655547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.655585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.655737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.655773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.655876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.655911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.656079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.656115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.656224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.656259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.656362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.656397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.656529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.057 [2024-11-19 21:27:11.656564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.057 qpair failed and we were unable to recover it. 00:37:38.057 [2024-11-19 21:27:11.656677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.656712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.656817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.656853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.656976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.657025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 
00:37:38.058 [2024-11-19 21:27:11.657158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.657207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.657349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.657385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.657530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.657565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.657696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.657731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.657835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.657874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.658052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.658111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.658257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.658295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.658437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.658473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.658620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.658654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.658792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.658826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 
00:37:38.058 [2024-11-19 21:27:11.658928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.658964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.659077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.659113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.659244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.659279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.659418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.659453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.659584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.659619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.659771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.659820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.659937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.659972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.660125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.660192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.660340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.660377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.660494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.660540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 
00:37:38.058 [2024-11-19 21:27:11.660680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.660716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.660841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.660876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.660982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.661018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.661160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.661198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.661338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.661374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.661479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.661514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.661649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.661684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.058 [2024-11-19 21:27:11.661856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.058 [2024-11-19 21:27:11.661891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.058 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.662013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.662062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.662228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.662264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 
00:37:38.059 [2024-11-19 21:27:11.662425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.662461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.662577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.662613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.662773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.662807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.662924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.662974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.663150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.663200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.663362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.663412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.663561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.663599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.663711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.663748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.663886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.663922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.664031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.664066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 
00:37:38.059 [2024-11-19 21:27:11.664244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.664293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.664451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.664487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.664600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.664635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.664793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.664828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.664974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.665019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.665166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.665216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.665358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.665395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.665506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.665543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.665702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.665738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.665858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.665894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 
00:37:38.059 [2024-11-19 21:27:11.666078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.666132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.666278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.666322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.666453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.666488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.666629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.666663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.666765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.666800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.666961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.666995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.667112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.667150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.667283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.667324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.667438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.667474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.059 qpair failed and we were unable to recover it. 00:37:38.059 [2024-11-19 21:27:11.667644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.059 [2024-11-19 21:27:11.667680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 
00:37:38.060 [2024-11-19 21:27:11.667817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.667853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.668013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.668048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.668183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.668219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.668360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.668394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.668528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.668562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.668693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.668728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.668826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.668860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.669016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.669050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.669206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.669242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.669345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.669379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 
00:37:38.060 [2024-11-19 21:27:11.669510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.669543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.669714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.669751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.669891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.669927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.670076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.670123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.670239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.670274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.670412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.670447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.670586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.670622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.670758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.670794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.670978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.671027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.671161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.671200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 
00:37:38.060 [2024-11-19 21:27:11.671364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.671399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.671558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.671593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.671751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.671785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.671922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.671958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.672119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.672174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.060 [2024-11-19 21:27:11.672299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.060 [2024-11-19 21:27:11.672343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.060 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.672455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.672489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.672600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.672635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.672766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.672800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.672946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.672981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 
00:37:38.061 [2024-11-19 21:27:11.673139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.673176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.673304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.673339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.673475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.673511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.673624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.673660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.673795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.673830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.673993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.674028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.674195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.674230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.674392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.674427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.674567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.674616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.674756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.674792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 
00:37:38.061 [2024-11-19 21:27:11.674898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.674932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.675110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.675145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.675258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.675292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.675467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.675516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.675636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.675673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.675805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.675841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.675973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.676008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.676123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.676158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.676296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.676331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.676443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.676479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 
00:37:38.061 [2024-11-19 21:27:11.676588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.676623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.676785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.676835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.676993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.677042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.677220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.677269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.677382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.677418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.677566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.061 [2024-11-19 21:27:11.677602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.061 qpair failed and we were unable to recover it. 00:37:38.061 [2024-11-19 21:27:11.677735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.677772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.677874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.677909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.678076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.678112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.678262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.678311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 
00:37:38.062 [2024-11-19 21:27:11.678451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.678489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.678664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.678699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.678810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.678844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.678997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.679046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.679177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.679219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.679369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.679406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.679546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.679581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.679765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.679813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.679959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.680011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.680162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.680211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 
00:37:38.062 [2024-11-19 21:27:11.680358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.680395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.680533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.680568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.680678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.680711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.680852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.680888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.680994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.681030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.681172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.681208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.681341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.681376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.681534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.681569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.681733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.681783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.681925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.681960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 
00:37:38.062 [2024-11-19 21:27:11.682112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.682161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.682281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.682317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.682422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.682458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.682593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.682629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.682739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.682773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.062 [2024-11-19 21:27:11.682929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.062 [2024-11-19 21:27:11.682968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.062 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.683082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.683127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.683287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.683322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.683475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.683508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.683671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.683705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 
00:37:38.063 [2024-11-19 21:27:11.683807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.683841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.683983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.684017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.684145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.684194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.684350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.684400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.684570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.684606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.684768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.684805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.684943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.684980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.685097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.685133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.685296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.685333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.685470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.685505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 
00:37:38.063 [2024-11-19 21:27:11.685635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.685670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.685779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.685813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.685969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.686019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.686207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.686245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.686409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.686449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.686630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.686666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.686836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.686883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.686996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.687032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.687186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.687235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.687352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.687392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 
00:37:38.063 [2024-11-19 21:27:11.687536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.687572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.687706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.687741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.687851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.687885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.688017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.688052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.688227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.688263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.063 qpair failed and we were unable to recover it. 00:37:38.063 [2024-11-19 21:27:11.688375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.063 [2024-11-19 21:27:11.688413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.688551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.688586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.688716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.688750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.688909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.688944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.689049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.689092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 
00:37:38.064 [2024-11-19 21:27:11.689201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.689234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.689367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.689401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.689502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.689536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.689645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.689681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.689791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.689828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.689951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.690000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.690148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.690184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.690324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.690359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.690495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.690528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.690630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.690664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 
00:37:38.064 [2024-11-19 21:27:11.690776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.690809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.690924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.690961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.691118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.691168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.691292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.691328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.691460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.691495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.691603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.691638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.691740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.691776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.691892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.691928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.692055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.692094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.692204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.692243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 
00:37:38.064 [2024-11-19 21:27:11.692367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.692402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.692515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.692551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.692684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.692719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.692826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.692860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.692994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.693050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.693168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.693204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.693335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.693369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.693480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.693513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.064 [2024-11-19 21:27:11.693651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.064 [2024-11-19 21:27:11.693686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.064 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.693794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.693829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 
00:37:38.065 [2024-11-19 21:27:11.693940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.693976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.694094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.694134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.694249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.694286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.694450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.694486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.694593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.694628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.694742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.694777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.694909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.694943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.695063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.695123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.695248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.695284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.695405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.695441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 
00:37:38.065 [2024-11-19 21:27:11.695545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.695580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.695682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.695716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.695820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.695855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.695994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.696029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.696242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.696282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.696417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.696485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.696596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.696632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.696760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.696795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.696935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.696972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.697088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.697124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 
00:37:38.065 [2024-11-19 21:27:11.697238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.697273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.697384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.697420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.697548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.697597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.697724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.697761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.697906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.697943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.698079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.698115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.698225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.698260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.698371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.698406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.698514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.698550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 00:37:38.065 [2024-11-19 21:27:11.698692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.698741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.065 qpair failed and we were unable to recover it. 
00:37:38.065 [2024-11-19 21:27:11.698872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.065 [2024-11-19 21:27:11.698909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.699021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.699057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.699202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.699237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.699341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.699375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.699509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.699548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.699661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.699696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.699795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.699829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.699933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.699968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.700100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.700137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.700287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.700338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 
00:37:38.066 [2024-11-19 21:27:11.700488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.700526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.700645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.700679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.700779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.700813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.700942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.700975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.701092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.701129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.701243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.701280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.701433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.701483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.701654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.701691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.701841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.701878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.701988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.702024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 
00:37:38.066 [2024-11-19 21:27:11.702150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.702186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.702315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.702349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.702483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.702516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.702642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.066 [2024-11-19 21:27:11.702677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.066 qpair failed and we were unable to recover it. 00:37:38.066 [2024-11-19 21:27:11.702813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.702849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.702952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.702989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.703108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.703146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.703266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.703301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.703432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.703468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.703605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.703640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 
00:37:38.067 [2024-11-19 21:27:11.703748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.703784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.703908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.703944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.704080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.704130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.704279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.704317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.704423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.704459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.704602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.704638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.704746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.704781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.704916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.704953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.705055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.705096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.705196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.705230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 
00:37:38.067 [2024-11-19 21:27:11.705333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.705367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.705499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.705533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.705637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.705674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.705790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.705827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.705955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.706009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.706154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.706190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.706300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.706334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.067 [2024-11-19 21:27:11.706494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.067 [2024-11-19 21:27:11.706527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.067 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.706650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.706685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.706795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.706831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 
00:37:38.068 [2024-11-19 21:27:11.706980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.707029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.707168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.707218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.707389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.707425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.707538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.707573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.707708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.707742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.707853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.707888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.708025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.708084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.708245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.708295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.708472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.708510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.708645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.708680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 
00:37:38.068 [2024-11-19 21:27:11.708792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.708827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.708933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.708968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.709104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.709153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.709277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.709313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.709418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.709453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.709566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.709601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.709713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.709746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.709904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.709938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.710053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.710096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.710246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.710295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 
00:37:38.068 [2024-11-19 21:27:11.710436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.710474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.710619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.710656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.710798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.710835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.710953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.710994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.711112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.711148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.711255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.711289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.711427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.711460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.711594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.711628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.711732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.711765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.711872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.711908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 
00:37:38.068 [2024-11-19 21:27:11.712038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.712105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.712232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.712270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.712390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.712426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.712572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.712607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.712720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.712761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.068 qpair failed and we were unable to recover it. 00:37:38.068 [2024-11-19 21:27:11.712903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.068 [2024-11-19 21:27:11.712938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.713046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.713085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.713209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.713242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.713382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.713417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.713527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.713560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 
00:37:38.069 [2024-11-19 21:27:11.713695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.713728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.713868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.713902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.714006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.714040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.714187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.714236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.714369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.714407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.714530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.714566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.714676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.714711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.714845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.714879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.715011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.715045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.715165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.715199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 
00:37:38.069 [2024-11-19 21:27:11.715301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.715341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.715453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.715487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.715625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.715658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.715769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.715803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.715917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.715952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.716120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.716154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.716288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.716333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.716451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.716485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.716626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.716660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.716771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.716806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 
00:37:38.069 [2024-11-19 21:27:11.716934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.716983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.717121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.717170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.717321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.717359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.717471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.717507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.717646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.717681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.717802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.717851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.717974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.718010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.718132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.718166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.718280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.718324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.718457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.718492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 
00:37:38.069 [2024-11-19 21:27:11.718597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.718632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.718769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.718805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.718931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.718979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.069 qpair failed and we were unable to recover it. 00:37:38.069 [2024-11-19 21:27:11.719141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.069 [2024-11-19 21:27:11.719190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.719339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.719379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.719512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.719548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.719659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.719693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.719830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.719865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.720034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.720077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.720224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.720258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 
00:37:38.070 [2024-11-19 21:27:11.720369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.720404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.720510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.720545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.720715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.720751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.720866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.720901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.721059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.721108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.721225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.721274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.721413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.721463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.721578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.721616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.721737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.721777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.721922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.721957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 
00:37:38.070 [2024-11-19 21:27:11.722085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.722129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.722240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.722276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.722398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.722433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.722530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.722565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.722669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.722706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.722823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.722858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.722987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.723022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.723135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.723170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.723281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.723315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.723431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.723465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 
00:37:38.070 [2024-11-19 21:27:11.723570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.723606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.723736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.723786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.723970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.724020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.724154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.724190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.724298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.724344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.724482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.724517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.724622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.724656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.724834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.724870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.724998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.725047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.725197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.725246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 
00:37:38.070 [2024-11-19 21:27:11.725367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.070 [2024-11-19 21:27:11.725402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.070 qpair failed and we were unable to recover it. 00:37:38.070 [2024-11-19 21:27:11.725535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.725570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.725704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.725738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.725847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.725881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.726064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.726131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.726266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.726315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.726463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.726499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.726608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.726643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.726768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.726803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.726925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.726975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 
00:37:38.071 [2024-11-19 21:27:11.727122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.727173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.727299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.727336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.727447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.727483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.727584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.727619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.727747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.727781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.727917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.727953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.728107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.728157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.728279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.728328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.728455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.728493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.728635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.728671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 
00:37:38.071 [2024-11-19 21:27:11.728778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.728814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.728923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.728958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.729120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.729156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.729263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.729298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.729431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.729465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.729614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.729651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.729814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.729850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.729958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.729993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.730134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.730169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.730288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.730338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 
00:37:38.071 [2024-11-19 21:27:11.730463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.730501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.730617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.730663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.730834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.071 [2024-11-19 21:27:11.730870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.071 qpair failed and we were unable to recover it. 00:37:38.071 [2024-11-19 21:27:11.730982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.731017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.731129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.731165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.731273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.731307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.731462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.731511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.731629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.731667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.731835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.731872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.732015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.732051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 
00:37:38.072 [2024-11-19 21:27:11.732195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.732245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.732369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.732408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.732544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.732580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.732717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.732753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.732861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.732902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.733031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.733108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.733256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.733293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.733417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.733453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.733585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.733621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.733729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.733764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 
00:37:38.072 [2024-11-19 21:27:11.733904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.733940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.734050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.734093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.734250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.734289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.734416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.734451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.734564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.734600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.734713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.734747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.734857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.734892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.735063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.735113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.735272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.735325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.735471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.735508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 
00:37:38.072 [2024-11-19 21:27:11.735637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.735673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.735784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.735820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.735953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.735988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.736101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.736140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.736251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.736286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.736403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.736438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.736546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.736581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.736686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.736722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.736818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.736852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.072 [2024-11-19 21:27:11.736964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.736999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 
00:37:38.072 [2024-11-19 21:27:11.737112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.072 [2024-11-19 21:27:11.737147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.072 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.737274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.737323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.737440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.737480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.737599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.737635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.737772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.737807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.737918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.737953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.738114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.738163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.738282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.738324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.738437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.738472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.738603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.738638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 
00:37:38.073 [2024-11-19 21:27:11.738776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.738811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.738922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.738956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.739093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.739133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.739273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.739307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.739420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.739459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.739592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.739627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.739764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.739799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.739955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.740005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.740155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.740192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.740309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.740348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 
00:37:38.073 [2024-11-19 21:27:11.740519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.740555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.740694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.740729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.740861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.740896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.741037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.741079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.741227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.741261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.741408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.741443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.741541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.741576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.741687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.741722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.741832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.741866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.742024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.742083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 
00:37:38.073 [2024-11-19 21:27:11.742243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.742293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.742452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.742490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.742601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.742637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.742776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.742812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.742941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.742992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.743127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.743163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.743290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.743335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.743467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.743513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.073 [2024-11-19 21:27:11.743649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.073 [2024-11-19 21:27:11.743684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.073 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.743811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.743845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 
00:37:38.074 [2024-11-19 21:27:11.743955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.743990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.744109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.744158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.744321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.744359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.744505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.744542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.744668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.744703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.744818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.744854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.744995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.745030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.745191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.745227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.745344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.745380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.745531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.745581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 
00:37:38.074 [2024-11-19 21:27:11.745731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.745769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.745902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.745948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.746089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.746133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.746266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.746302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.746445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.746485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.746588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.746623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.746725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.746760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.746865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.746900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.747016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.747050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.747161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.747195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 
00:37:38.074 [2024-11-19 21:27:11.747317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.747369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.747526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.747576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.747691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.747728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.747847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.747882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.748019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.748054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.748180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.748215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.748328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.748363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.748497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.748531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.748674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.748709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.748815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.748850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 
00:37:38.074 [2024-11-19 21:27:11.749000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.749039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.749197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.749246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.749363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.749401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.749535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.749571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.749702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.749737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.749849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.749885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.074 [2024-11-19 21:27:11.750018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.074 [2024-11-19 21:27:11.750067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.074 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.750209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.750259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.750405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.750442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.750595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.750631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 
00:37:38.075 [2024-11-19 21:27:11.750772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.750822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.750981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.751023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.751189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.751225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.751352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.751392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.751542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.751578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.751746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.751781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.751883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.751918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.752035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.752081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.752212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.752247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.752358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.752395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 
00:37:38.075 [2024-11-19 21:27:11.752534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.752570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.752686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.752735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.752883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.752919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.753078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.753134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.753277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.753322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.753435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.753470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.753632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.753668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.753778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.753813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.753948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.753982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.754113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.754163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 
00:37:38.075 [2024-11-19 21:27:11.754324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.754363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.754500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.754536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.754665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.754701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.754823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.754873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.755049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.755111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.755233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.755268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.755389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.755423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.755557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.755592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.755756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.755791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.755931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.755965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 
00:37:38.075 [2024-11-19 21:27:11.756115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.756166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.756287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.756333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.075 qpair failed and we were unable to recover it. 00:37:38.075 [2024-11-19 21:27:11.756462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.075 [2024-11-19 21:27:11.756499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.756610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.756647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.756790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.756826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.756976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.757025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.757182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.757218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.757385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.757434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.757588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.757627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.757791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.757828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 
00:37:38.076 [2024-11-19 21:27:11.757933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.757969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.758113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.758157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.758327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.758377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.758520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.758556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.758686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.758721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.758882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.758917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.759049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.759099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.759260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.759309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.759458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.759496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.759650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.759687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 
00:37:38.076 [2024-11-19 21:27:11.759799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.759834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.759972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.760007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.760176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.760225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.760406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.760456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.760580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.760617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.760833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.760869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.761007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.761043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.761191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.761227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.761345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.761396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.761566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.761600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 
00:37:38.076 [2024-11-19 21:27:11.761755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.761790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.761894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.761928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.762065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.762113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.762242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.762276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.762390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.762426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.762563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.762597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.762792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.762828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.076 qpair failed and we were unable to recover it. 00:37:38.076 [2024-11-19 21:27:11.762968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.076 [2024-11-19 21:27:11.763003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.763120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.763155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.763316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.763360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 
00:37:38.077 [2024-11-19 21:27:11.763494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.763528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.763715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.763749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.763888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.763923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.764060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.764108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.764241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.764276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.764448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.764484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.764595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.764629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.764786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.764821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.764963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.764998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.765138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.765174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 
00:37:38.077 [2024-11-19 21:27:11.765312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.765356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.765467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.765507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.765606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.765640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.765781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.765830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.765966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.766015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.766150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.766189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.766355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.766392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.766550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.766587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.766723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.766759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.766882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.766918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 
00:37:38.077 [2024-11-19 21:27:11.767057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.767109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.767261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.767311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.767452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.767487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.767625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.767662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.767796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.767831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.768014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.768065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.768227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.768265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.768410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.768445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.768544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.768579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.077 [2024-11-19 21:27:11.768695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.768730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 
00:37:38.077 [2024-11-19 21:27:11.768863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.077 [2024-11-19 21:27:11.768897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.077 qpair failed and we were unable to recover it. 00:37:38.363 [2024-11-19 21:27:11.769019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.363 [2024-11-19 21:27:11.769077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.363 qpair failed and we were unable to recover it. 00:37:38.363 [2024-11-19 21:27:11.769235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.363 [2024-11-19 21:27:11.769284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.363 qpair failed and we were unable to recover it. 00:37:38.363 [2024-11-19 21:27:11.769406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.363 [2024-11-19 21:27:11.769444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.363 qpair failed and we were unable to recover it. 00:37:38.363 [2024-11-19 21:27:11.769582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.363 [2024-11-19 21:27:11.769617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.363 qpair failed and we were unable to recover it. 00:37:38.363 [2024-11-19 21:27:11.769764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.363 [2024-11-19 21:27:11.769800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.363 qpair failed and we were unable to recover it. 00:37:38.363 [2024-11-19 21:27:11.769938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.363 [2024-11-19 21:27:11.769974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.363 qpair failed and we were unable to recover it. 00:37:38.363 [2024-11-19 21:27:11.770112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.363 [2024-11-19 21:27:11.770149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.363 qpair failed and we were unable to recover it. 00:37:38.363 [2024-11-19 21:27:11.770266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.363 [2024-11-19 21:27:11.770304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.363 qpair failed and we were unable to recover it. 00:37:38.363 [2024-11-19 21:27:11.770422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.363 [2024-11-19 21:27:11.770458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.363 qpair failed and we were unable to recover it. 
00:37:38.363 [2024-11-19 21:27:11.770591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.363 [2024-11-19 21:27:11.770626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.363 qpair failed and we were unable to recover it. 00:37:38.363 [2024-11-19 21:27:11.770737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.363 [2024-11-19 21:27:11.770773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.363 qpair failed and we were unable to recover it. 00:37:38.363 [2024-11-19 21:27:11.770878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.363 [2024-11-19 21:27:11.770913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.363 qpair failed and we were unable to recover it. 00:37:38.363 [2024-11-19 21:27:11.771025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.363 [2024-11-19 21:27:11.771061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.363 qpair failed and we were unable to recover it. 00:37:38.363 [2024-11-19 21:27:11.771188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.771224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.771365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.771400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.771506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.771542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.771676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.771711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.771824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.771863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.771976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.772012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 
00:37:38.364 [2024-11-19 21:27:11.772160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.772196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.772362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.772404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.772509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.772545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.772686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.772722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.772829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.772864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.772995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.773033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.773210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.773246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.773386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.773422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.773561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.773596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.773751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.773786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 
00:37:38.364 [2024-11-19 21:27:11.773923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.773960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.774092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.774153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.774264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.774302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.774431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.774468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.774614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.774650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.774765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.774801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.774902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.774937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.775064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.775115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.775265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.775313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.775467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.775505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 
00:37:38.364 [2024-11-19 21:27:11.775644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.775681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.775788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.775823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.775951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.775986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.776125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.776162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.776293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.776338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.776458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.776507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.776674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.776711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.776852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.776888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.777029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.777064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.777231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.777266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 
00:37:38.364 [2024-11-19 21:27:11.777371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.364 [2024-11-19 21:27:11.777406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.364 qpair failed and we were unable to recover it. 00:37:38.364 [2024-11-19 21:27:11.777571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.777606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.777735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.777770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.777905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.777940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.778101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.778148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.778275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.778336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.778504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.778542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.778681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.778717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.778854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.778889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.779001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.779039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 
00:37:38.365 [2024-11-19 21:27:11.779216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.779253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.779433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.779473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.779601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.779636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.779743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.779778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.779936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.779971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.780116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.780151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.780280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.780322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.780460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.780495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.780654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.780689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.780820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.780855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 
00:37:38.365 [2024-11-19 21:27:11.781012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.781062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.781246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.781295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.781468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.781506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.781671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.781706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.781917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.781953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.782124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.782159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.782268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.782304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.782550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.782585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.782724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.782759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.782891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.782926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 
00:37:38.365 [2024-11-19 21:27:11.783055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.783097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.783235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.783270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.783408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.783445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.783589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.783623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.783759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.783795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.783934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.783969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.784129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.784179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.784311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.365 [2024-11-19 21:27:11.784361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.365 qpair failed and we were unable to recover it. 00:37:38.365 [2024-11-19 21:27:11.784556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.784591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.784728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.784763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 
00:37:38.366 [2024-11-19 21:27:11.784909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.784943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.785077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.785112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.785250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.785284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.785432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.785468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.785573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.785607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.785754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.785791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.785943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.785993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.786160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.786211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.786366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.786402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.786515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.786550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 
00:37:38.366 [2024-11-19 21:27:11.786692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.786726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.786860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.786903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.787050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.787109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.787269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.787319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.787464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.787503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.787640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.787676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.787838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.787873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.788031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.788089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.788221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.788270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.788417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.788455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 
00:37:38.366 [2024-11-19 21:27:11.788586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.788622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.788752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.788787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.788896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.788930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.789062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.789103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.789229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.789279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.789405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.789444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.789583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.789620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.789753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.789788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.789910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.789945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.790086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.790136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 
00:37:38.366 [2024-11-19 21:27:11.790306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.790342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.790480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.790517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.790690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.790725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.790865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.790900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.791034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.366 [2024-11-19 21:27:11.791076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.366 qpair failed and we were unable to recover it. 00:37:38.366 [2024-11-19 21:27:11.791218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.791267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.791413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.791448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.791584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.791618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.791785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.791820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.791943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.791994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 
00:37:38.367 [2024-11-19 21:27:11.792155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.792195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.792309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.792345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.792479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.792514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.792628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.792664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.792789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.792824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.792966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.793002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.793110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.793145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.793289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.793327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.793483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.793532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.793643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.793681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 
00:37:38.367 [2024-11-19 21:27:11.793844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.793879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.793994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.794039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.794268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.794303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.794412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.794448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.794618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.794653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.794794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.794829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.794965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.795000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.795100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.795133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.795238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.795273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.795384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.795419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 
00:37:38.367 [2024-11-19 21:27:11.795555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.795590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.795730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.795764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.795876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.795912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.796053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.796097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.796207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.796241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.796346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.796381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.796523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.796558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.796668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.796702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.796857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.796892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.797001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.797036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 
00:37:38.367 [2024-11-19 21:27:11.797209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.797244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.797347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.797383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.367 [2024-11-19 21:27:11.797523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.367 [2024-11-19 21:27:11.797559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.367 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.797662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.797696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.797873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.797908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.798012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.798047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.798195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.798230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.798366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.798400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.798551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.798587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.798733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.798783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 
00:37:38.368 [2024-11-19 21:27:11.798930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.798966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.799092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.799128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.799262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.799297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.799430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.799466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.799630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.799665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.799780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.799825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.799973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.800023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.800156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.800206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.800352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.800392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.800499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.800536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 
00:37:38.368 [2024-11-19 21:27:11.800673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.800709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.800937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.800978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.801133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.801183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.801329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.801367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.801482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.801518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.801653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.801688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.801797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.801833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.802013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.802064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.802199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.802238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.802380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.802419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 
00:37:38.368 [2024-11-19 21:27:11.802533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.802571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.802705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.802741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.802853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.802888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.803026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.803063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.368 [2024-11-19 21:27:11.803216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.368 [2024-11-19 21:27:11.803252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.368 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.803389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.803425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.803536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.803572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.803703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.803739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.803878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.803915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.804021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.804057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 
00:37:38.369 [2024-11-19 21:27:11.804225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.804274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.804390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.804426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.804560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.804595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.804729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.804764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.804899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.804934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.805087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.805137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.805255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.805292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.805452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.805487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.805639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.805675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.805787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.805822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 
00:37:38.369 [2024-11-19 21:27:11.805961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.805999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.806107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.806143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.806319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.806370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.806493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.806532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.806674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.806711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.806825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.806861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.807003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.807039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.807173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.807222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.807377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.807414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.807551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.807586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 
00:37:38.369 [2024-11-19 21:27:11.807724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.807760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.807898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.807939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.808105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.808141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.808245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.808282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.808466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.808516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.808687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.808724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.808856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.808891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.809026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.809060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.809206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.809241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.809376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.809412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 
00:37:38.369 [2024-11-19 21:27:11.809518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.809554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.809691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.809725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.369 [2024-11-19 21:27:11.809834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.369 [2024-11-19 21:27:11.809869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.369 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.810008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.810043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.810181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.810216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.810357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.810393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.810547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.810597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.810712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.810750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.810882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.810918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.811054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.811100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 
00:37:38.370 [2024-11-19 21:27:11.811259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.811309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.811458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.811494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.811632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.811667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.811805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.811840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.811940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.811975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.812153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.812203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.812347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.812384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.812515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.812550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.812687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.812723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.812858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.812894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 
00:37:38.370 [2024-11-19 21:27:11.813064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.813110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.813239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.813273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.813373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.813407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.813519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.813551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.813659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.813696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.813832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.813867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.814027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.814063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.814215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.814250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.814355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.814390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.814502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.814538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 
00:37:38.370 [2024-11-19 21:27:11.814688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.814724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.814887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.814928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.815089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.815124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.815258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.815293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.815399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.815433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.815536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.815572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.815700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.815736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.815840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.815876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.816033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.816092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 00:37:38.370 [2024-11-19 21:27:11.816209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.816248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.370 qpair failed and we were unable to recover it. 
00:37:38.370 [2024-11-19 21:27:11.816399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.370 [2024-11-19 21:27:11.816436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.816575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.816622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.816760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.816797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.816910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.816945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.817084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.817120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.817259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.817294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.817402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.817436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.817546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.817581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.817743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.817778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.817913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.817948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 
00:37:38.371 [2024-11-19 21:27:11.818093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.818142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.818264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.818302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.818474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.818510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.818616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.818652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.818787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.818823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.818985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.819021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.819192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.819229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.819331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.819367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.819506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.819542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.819677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.819713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 
00:37:38.371 [2024-11-19 21:27:11.819849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.819884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.820006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.820056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.820213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.820250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.820373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.820423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.820571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.820609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.820743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.820779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.820920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.820955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.821090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.821137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.821248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.821287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.821425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.821461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 
00:37:38.371 [2024-11-19 21:27:11.821597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.821632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.821829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.821870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.822004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.822039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.822150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.822185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.822297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.822334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.822507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.822544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.822661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.822697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.822830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.822866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.371 qpair failed and we were unable to recover it. 00:37:38.371 [2024-11-19 21:27:11.822993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.371 [2024-11-19 21:27:11.823029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.823133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.823169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 
00:37:38.372 [2024-11-19 21:27:11.823280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.823316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.823413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.823448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.823579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.823614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.823719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.823753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.823897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.823932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.824098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.824148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.824292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.824329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.824469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.824505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.824643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.824679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.824817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.824852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 
00:37:38.372 [2024-11-19 21:27:11.824988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.825024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.825191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.825228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.825368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.825406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.825528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.825565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.825702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.825749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.825889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.825924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.826058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.826099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.826220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.826257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.826428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.826469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.826634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.826669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 
00:37:38.372 [2024-11-19 21:27:11.826779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.826814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.826920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.826956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.827090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.827125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.827266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.827301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.827439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.827474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.827635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.827670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.827781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.827815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.827946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.827981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.828149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.828185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.828332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.828369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 
00:37:38.372 [2024-11-19 21:27:11.828496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.828531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.828660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.372 [2024-11-19 21:27:11.828700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.372 qpair failed and we were unable to recover it. 00:37:38.372 [2024-11-19 21:27:11.828862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.828898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.829027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.829063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.829210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.829245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.829379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.829414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.829546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.829581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.829682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.829717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.829825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.829860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.830012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.830062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 
00:37:38.373 [2024-11-19 21:27:11.830234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.830284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.830398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.830434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.830579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.830614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.830718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.830754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.830893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.830927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.831077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.831127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.831270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.831307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.831475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.831511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.831646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.831681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.831782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.831817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 
00:37:38.373 [2024-11-19 21:27:11.831952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.832002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.832131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.832168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.832306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.832341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.832444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.832479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.832618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.832654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.832770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.832807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.832955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.833005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.833129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.833167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.833320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.833371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.833489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.833527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 
00:37:38.373 [2024-11-19 21:27:11.833633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.833670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.833808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.833844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.833957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.833993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.834119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.834154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.834299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.834334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.834462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.834497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.834659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.834694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.834826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.834861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.835000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.835037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.373 qpair failed and we were unable to recover it. 00:37:38.373 [2024-11-19 21:27:11.835176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.373 [2024-11-19 21:27:11.835226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 
00:37:38.374 [2024-11-19 21:27:11.835357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.835407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.835526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.835567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.835709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.835744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.835898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.835933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.836043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.836086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.836220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.836255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.836386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.836421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.836529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.836564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.836725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.836760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.836896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.836931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 
00:37:38.374 [2024-11-19 21:27:11.837085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.837135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.837318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.837368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.837512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.837550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.837690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.837726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.837891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.837927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.838039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.838081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.838228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.838265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.838392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.838442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.838568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.838607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.838707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.838743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 
00:37:38.374 [2024-11-19 21:27:11.838850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.838885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.839021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.839057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.839199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.839234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.839366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.839400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.839504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.839539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.839692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.839727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.839835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.839869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.840001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.840036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.840182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.840233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.840370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.840407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 
00:37:38.374 [2024-11-19 21:27:11.840538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.840572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.840684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.840719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.840850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.840884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.841000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.841036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.841212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.841248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.841363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.841402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.841535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.374 [2024-11-19 21:27:11.841570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.374 qpair failed and we were unable to recover it. 00:37:38.374 [2024-11-19 21:27:11.841678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.841715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.841848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.841883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.842012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.842047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 
00:37:38.375 [2024-11-19 21:27:11.842181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.842216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.842320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.842355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.842468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.842503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.842641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.842675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.842838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.842872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.843004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.843040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.843183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.843221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.843338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.843374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.843511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.843546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.843708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.843743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 
00:37:38.375 [2024-11-19 21:27:11.843841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.843875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.844010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.844045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.844177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.844213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.844316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.844352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.844463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.844499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.844649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.844684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.844819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.844854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.845011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.845047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.845216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.845252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.845416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.845452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 
00:37:38.375 [2024-11-19 21:27:11.845587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.845623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.845782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.845817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.845919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.845954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.846089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.846125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.846265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.846301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.846412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.846448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.846611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.846646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.846783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.846818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.846932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.846974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.847144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.847179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 
00:37:38.375 [2024-11-19 21:27:11.847292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.847327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.847437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.847482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.847615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.847650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.847814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.375 [2024-11-19 21:27:11.847850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.375 qpair failed and we were unable to recover it. 00:37:38.375 [2024-11-19 21:27:11.848006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.848056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.848237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.848276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.848412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.848448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.848586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.848622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.848782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.848818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.848930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.848966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 
00:37:38.376 [2024-11-19 21:27:11.849153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.849213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.849408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.849447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.849571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.849618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.849731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.849766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.849909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.849944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.850109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.850160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.850302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.850351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.850526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.850562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.850707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.850742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.850891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.850941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 
00:37:38.376 [2024-11-19 21:27:11.851089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.851127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.851276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.851312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.851447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.851482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.851592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.851627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.851790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.851824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.851966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.852002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.852117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.852153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.852279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.852315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.852453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.852488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.852651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.852687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 
00:37:38.376 [2024-11-19 21:27:11.852798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.852836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.852973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.853009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.853159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.853195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.853329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.853364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.853499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.853535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.853673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.853707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.853808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.853842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.853995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.854032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.854183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.854225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.854366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.854402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 
00:37:38.376 [2024-11-19 21:27:11.854535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.854571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.854704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.376 [2024-11-19 21:27:11.854740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.376 qpair failed and we were unable to recover it. 00:37:38.376 [2024-11-19 21:27:11.854850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.854885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.855028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.855064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.855210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.855245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.855346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.855381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.855543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.855579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.855742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.855777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.855883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.855917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.856060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.856117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 
00:37:38.377 [2024-11-19 21:27:11.856283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.856333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.856492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.856529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.856697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.856733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.856874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.856910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.857084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.857134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.857252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.857288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.857433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.857468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.857606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.857640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.857772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.857807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.857966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.858001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 
00:37:38.377 [2024-11-19 21:27:11.858137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.858175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.858337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.858387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.858513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.858551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.858707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.858744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.858876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.858923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.859067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.859110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.859227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.859263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.859423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.859473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.859594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.859642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.859753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.859789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 
00:37:38.377 [2024-11-19 21:27:11.859952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.859987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.860113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.860149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.860312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.860347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.860520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.860556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.860743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.860779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.377 qpair failed and we were unable to recover it. 00:37:38.377 [2024-11-19 21:27:11.860923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.377 [2024-11-19 21:27:11.860967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.861119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.861161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.861279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.861312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.861421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.861468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.861605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.861652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 
00:37:38.378 [2024-11-19 21:27:11.861774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.861810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.861924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.861963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.862128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.862166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.862276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.862311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.862482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.862531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.862674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.862710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.862848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.862883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.863043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.863083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.863195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.863229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.863365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.863399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 
00:37:38.378 [2024-11-19 21:27:11.863553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.863588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.863718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.863752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.863880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.863943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.864058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.864103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.864239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.864275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.864388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.864423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.864584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.864619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.864780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.864815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.864921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.864957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.865105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.865155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 
00:37:38.378 [2024-11-19 21:27:11.865298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.865336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.865504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.865540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.865639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.865675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.865788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.865829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.865970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.866005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.866143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.866180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.866372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.866422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.866568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.866606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.866743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.866789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.866926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.866971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 
00:37:38.378 [2024-11-19 21:27:11.867138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.867188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.867337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.867374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.378 [2024-11-19 21:27:11.867540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.378 [2024-11-19 21:27:11.867576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.378 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.867716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.867751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.867877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.867912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.868049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.868090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.868240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.868277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.868401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.868436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.868552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.868593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.868699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.868734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 
00:37:38.379 [2024-11-19 21:27:11.868868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.868903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.869017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.869062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.869193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.869229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.869336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.869371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.869506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.869541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.869668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.869703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.869804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.869839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.869967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.870002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.870133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.870183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.870305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.870355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 
00:37:38.379 [2024-11-19 21:27:11.870480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.870519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.870658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.870694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.870810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.870847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.870977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.871027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.871179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.871215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.871336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.871372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.871480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.871516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.871654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.871688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.871820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.871855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.871954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.871989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 
00:37:38.379 [2024-11-19 21:27:11.872109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.872150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.872302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.872351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.872498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.872535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.872644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.872679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.872842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.872877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.872995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.873031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.873191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.873227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.873367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.873404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.873510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.873547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 00:37:38.379 [2024-11-19 21:27:11.873675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.873711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.379 qpair failed and we were unable to recover it. 
00:37:38.379 [2024-11-19 21:27:11.873817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.379 [2024-11-19 21:27:11.873852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.874011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.874047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.874187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.874223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.874337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.874372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.874478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.874513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.874668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.874702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.874812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.874846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.874980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.875015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.875156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.875198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.875333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.875369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 
00:37:38.380 [2024-11-19 21:27:11.875506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.875541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.875671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.875706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.875842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.875876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.876003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.876037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.876155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.876191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.876330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.876366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.876518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.876568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.876715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.876750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.876885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.876920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.877092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.877142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 
00:37:38.380 [2024-11-19 21:27:11.877255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.877290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.877430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.877466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.877606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.877640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.877745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.877779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.877891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.877927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.878053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.878117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.878289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.878326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.878460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.878496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.878665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.878701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.878828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.878877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 
00:37:38.380 [2024-11-19 21:27:11.879016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.879051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.879163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.879198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.879356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.879391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.879533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.879569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.879704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.879739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.879857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.879891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.879991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.880026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.880128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.380 [2024-11-19 21:27:11.880163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.380 qpair failed and we were unable to recover it. 00:37:38.380 [2024-11-19 21:27:11.880300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.880335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.880495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.880530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 
00:37:38.381 [2024-11-19 21:27:11.880663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.880699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.880811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.880851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.880975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.881026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.881193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.881242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.881395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.881444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.881580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.881616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.881754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.881788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.881951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.881986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.882120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.882160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.882300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.882334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 
00:37:38.381 [2024-11-19 21:27:11.882494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.882529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.882656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.882691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.882801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.882841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.882982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.883018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.883177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.883227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.883363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.883398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.883557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.883592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.883761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.883795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.883931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.883966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.884095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.884130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 
00:37:38.381 [2024-11-19 21:27:11.884289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.884339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.884506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.884544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.884709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.884759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.884875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.884922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.885061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.885102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.885237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.885272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.885404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.885439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.885569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.885603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.885740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.885774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.885911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.885946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 
00:37:38.381 [2024-11-19 21:27:11.886061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.886118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.886270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.886322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.886476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.886515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.886655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.886691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.886804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.886840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.381 [2024-11-19 21:27:11.886959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.381 [2024-11-19 21:27:11.886994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.381 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.887126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.887161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.887326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.887361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.887522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.887557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.887668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.887702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 
00:37:38.382 [2024-11-19 21:27:11.887841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.887876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.888008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.888043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.888188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.888222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.888425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.888474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.888586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.888624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.888760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.888797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.888907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.888943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.889083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.889125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.889236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.889277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.889443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.889479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 
00:37:38.382 [2024-11-19 21:27:11.889587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.889622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.889730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.889765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.889901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.889935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.890082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.890118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.890226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.890261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.890403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.890438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.890596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.890632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.890765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.890799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.890964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.891001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.891127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.891163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 
00:37:38.382 [2024-11-19 21:27:11.891287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.891336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.891482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.891518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.891625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.891660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.891796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.891831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.891965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.892000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.892106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.892142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.892275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.892311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.892440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.892475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.892613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.892648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.382 [2024-11-19 21:27:11.892796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.892833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 
00:37:38.382 [2024-11-19 21:27:11.892936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.382 [2024-11-19 21:27:11.892971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.382 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.893138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.893175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.893290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.893325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.893462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.893497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.893658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.893693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.893854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.893890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.894003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.894038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.894148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.894183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.894346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.894381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.894511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.894546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 
00:37:38.383 [2024-11-19 21:27:11.894660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.894694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.894801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.894837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.894998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.895033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.895189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.895240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.895352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.895387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.895541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.895590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.895734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.895771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.895907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.895943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.896053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.896102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.896216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.896252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 
00:37:38.383 [2024-11-19 21:27:11.896383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.896418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.896584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.896620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.896751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.896787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.896938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.896979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.897116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.897152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.897279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.897314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.897479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.897514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.897661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.897695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.897833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.897869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.898008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.898044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 
00:37:38.383 [2024-11-19 21:27:11.898176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.898226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.898367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.898416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.898574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.898611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.898771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.898807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.898915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.898950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.899067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.899123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.899234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.899271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.899440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.899480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.899614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.899650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.899756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.899792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 
00:37:38.383 [2024-11-19 21:27:11.899966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.900002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.383 qpair failed and we were unable to recover it. 00:37:38.383 [2024-11-19 21:27:11.900117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.383 [2024-11-19 21:27:11.900153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.900265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.900302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.900436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.900473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.900634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.900670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.900775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.900810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.900936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.900974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.901141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.901187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.901317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.901367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.901512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.901549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 
00:37:38.384 [2024-11-19 21:27:11.901678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.901714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.901852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.901888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.902016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.902051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.902215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.902270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.902445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.902481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.902587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.902622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.902728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.902763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.902898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.902933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.903080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.903120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.903260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.903296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 
00:37:38.384 [2024-11-19 21:27:11.903436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.903471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.903600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.903635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.903766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.903801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.903946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.903985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.904107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.904157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.904302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.904338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.904480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.904516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.904651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.904687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.904790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.904826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.904961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.904997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 
00:37:38.384 [2024-11-19 21:27:11.905133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.905169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.905296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.905346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.905499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.905536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.905671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.905706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.905817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.905852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.905963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.905998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.906112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.906148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.906283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.906317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.906423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.906457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.906599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.906634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 
00:37:38.384 [2024-11-19 21:27:11.906743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.906780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.906935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.384 [2024-11-19 21:27:11.906985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.384 qpair failed and we were unable to recover it. 00:37:38.384 [2024-11-19 21:27:11.907164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.907212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.907326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.907365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.907528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.907565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.907714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.907753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.907873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.907909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.908038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.908097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.908249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.908285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.908451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.908487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 
00:37:38.385 [2024-11-19 21:27:11.908619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.908654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.908792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.908826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.908932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.908970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.909135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.909172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.909307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.909342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.909479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.909515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.909667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.909716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.909888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.909926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.910039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.910093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.910258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.910293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 
00:37:38.385 [2024-11-19 21:27:11.910425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.910460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.910571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.910607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.910774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.910810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.910933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.910982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.911125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.911164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.911302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.911339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.911482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.911519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.911700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.911749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.911891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.911926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.912094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.912129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 
00:37:38.385 [2024-11-19 21:27:11.912236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.912271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.912407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.912442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.912616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.912651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.912785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.912821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.912953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.912987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.913139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.913189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.913309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.913346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.913511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.913546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.913674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.913710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.913809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.913845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 
00:37:38.385 [2024-11-19 21:27:11.913993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.914042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.914185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.914220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.914404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.385 [2024-11-19 21:27:11.914455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.385 qpair failed and we were unable to recover it. 00:37:38.385 [2024-11-19 21:27:11.914624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.914662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.914803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.914840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.914977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.915013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.915167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.915204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.915358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.915407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.915516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.915554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.915667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.915703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 
00:37:38.386 [2024-11-19 21:27:11.915840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.915875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.915998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.916048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.916187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.916236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.916410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.916447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.916583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.916618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.916755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.916792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.916929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.916964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.917078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.917115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.917231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.917287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.917458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.917496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 
00:37:38.386 [2024-11-19 21:27:11.917612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.917649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.917751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.917785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.917946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.917980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.918107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.918143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.918277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.918314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.918428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.918465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.918627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.918662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.918837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.918873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.919012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.919059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.919220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.919270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 
00:37:38.386 [2024-11-19 21:27:11.919413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.919451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.919553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.919589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.919700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.919735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.919847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.919895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.920035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.920080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.920214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.920249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.920411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.920446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.920583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.920618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.920726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.920761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 00:37:38.386 [2024-11-19 21:27:11.920905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.386 [2024-11-19 21:27:11.920941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.386 qpair failed and we were unable to recover it. 
00:37:38.387 [2024-11-19 21:27:11.921091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.921127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.921239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.921275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.921411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.921445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.921547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.921582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.921687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.921721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.921857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.921893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.922031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.922088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.922249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.922299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.922442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.922478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.922594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.922629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 
00:37:38.387 [2024-11-19 21:27:11.922735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.922769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.922899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.922934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.923080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.923118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.923252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.923287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.923420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.923455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.923586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.923621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.923786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.923821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.923934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.923974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.924141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.924182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.924319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.924353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 
00:37:38.387 [2024-11-19 21:27:11.924489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.924524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.924651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.924686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.924839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.924888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.925008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.925044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.925190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.925225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.925360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.925396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.925506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.925542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.925691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.925740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.925856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.925891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.926035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.926081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 
00:37:38.387 [2024-11-19 21:27:11.926220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.926256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.926359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.926395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.926508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.926543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.926703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.926739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.926883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.926923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.927037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.927081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.927190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.927226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.927364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.927399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.927534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.927570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.927677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.927713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 
00:37:38.387 [2024-11-19 21:27:11.927874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.927910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.928033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.928091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.928253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.928303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.928415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.387 [2024-11-19 21:27:11.928452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.387 qpair failed and we were unable to recover it. 00:37:38.387 [2024-11-19 21:27:11.928619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.928655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.928826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.928866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.929042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.929088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.929226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.929276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.929387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.929425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.929560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.929596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 
00:37:38.388 [2024-11-19 21:27:11.929728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.929763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.929880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.929916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.930041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.930101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.930219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.930258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.930366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.930404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.930569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.930604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.930708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.930745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.930845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.930881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.931005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.931054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.931221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.931257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 
00:37:38.388 [2024-11-19 21:27:11.931358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.931394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.931556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.931591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.931699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.931735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.931871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.931906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.932066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.932114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.932220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.932256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.932395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.932433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.932541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.932577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.932732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.932781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.932923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.932961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 
00:37:38.388 [2024-11-19 21:27:11.933095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.933131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.933265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.933299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.933445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.933480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.933613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.933648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.933786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.933823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.933932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.933968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.934101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.934138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.934267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.934303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.934463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.934499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.934666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.934701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 
00:37:38.388 [2024-11-19 21:27:11.934838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.934873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.935034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.935076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.935206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.935241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.935350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.935385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.935520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.935556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.935694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.935735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.935896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.935933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.936067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.388 [2024-11-19 21:27:11.936109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.388 qpair failed and we were unable to recover it. 00:37:38.388 [2024-11-19 21:27:11.936214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.936249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.936386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.936420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 
00:37:38.389 [2024-11-19 21:27:11.936530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.936565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.936696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.936731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.936896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.936933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.937034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.937078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.937218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.937254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.937357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.937392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.937529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.937564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.937727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.937763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.937900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.937937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.938061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.938118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 
00:37:38.389 [2024-11-19 21:27:11.938276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.938325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.938445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.938482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.938623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.938657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.938767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.938802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.938908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.938943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.939125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.939175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.939315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.939365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.939522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.939571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.939682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.939720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.939836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.939873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 
00:37:38.389 [2024-11-19 21:27:11.940039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.940090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.940202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.940237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.940386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.940431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.940546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.940583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.940719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.940754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.940855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.940888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.941046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.941105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.941251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.941289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.941399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.941434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.941565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.941600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 
00:37:38.389 [2024-11-19 21:27:11.941715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.941750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.941917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.941954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.942097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.942146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.942261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.942299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.942409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.942444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.942583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.942624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.942760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.942795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.942904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.942939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.943056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.943100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.943211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.943246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 
00:37:38.389 [2024-11-19 21:27:11.943374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.943410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.943514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.389 [2024-11-19 21:27:11.943549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.389 qpair failed and we were unable to recover it. 00:37:38.389 [2024-11-19 21:27:11.943675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.943710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.943879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.943915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.944027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.944063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.944208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.944243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.944415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.944464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.944609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.944648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.944763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.944800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.944950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.944986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 
00:37:38.390 [2024-11-19 21:27:11.945152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.945200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.945312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.945348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.945484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.945521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.945632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.945667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.945804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.945839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.945979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.946016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.946129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.946165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.946320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.946369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.946488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.946526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.946639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.946676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 
00:37:38.390 [2024-11-19 21:27:11.946815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.946850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.946967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.947003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.947125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.947161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.947269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.947303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.947411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.947447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.947579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.947615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.947758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.947808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.947920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.947957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.948113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.948148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.948258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.948293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 
00:37:38.390 [2024-11-19 21:27:11.948422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.948457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.948574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.948609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.948720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.948757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.948892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.948928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.949037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.949078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.949210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.949250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.949361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.949396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.949504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.949539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.949667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.949703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.949810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.949846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 
00:37:38.390 [2024-11-19 21:27:11.949984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.950019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.950154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.950189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.950326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.950360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.390 [2024-11-19 21:27:11.950464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.390 [2024-11-19 21:27:11.950500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.390 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.950611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.950647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.950759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.950794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.950904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.950940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.951091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.951128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.951248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.951283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.951413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.951463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 
00:37:38.391 [2024-11-19 21:27:11.951581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.951616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.951728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.951763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.951895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.951930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.952065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.952111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.952214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.952249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.952358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.952394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.952530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.952565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.952681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.952730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.952877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.952913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.953023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.953058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 
00:37:38.391 [2024-11-19 21:27:11.953181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.953216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.953328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.953364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.953505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.953539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.953644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.953680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.953811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.953846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.953954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.953990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.954100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.954135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.954263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.954313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.954452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.954490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.954631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.954666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 
00:37:38.391 [2024-11-19 21:27:11.954799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.954834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.954948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.954983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.955102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.955138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.955242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.955277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.955432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.955468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.955602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.955641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.955779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.955815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.955951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.955987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.956092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.956129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.956236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.956276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 
00:37:38.391 [2024-11-19 21:27:11.956412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.956447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.956615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.956649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.956767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.956807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.956916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.956962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.957082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.957117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.957248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.957283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.957415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.957449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.957585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.957620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.957734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.957770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 00:37:38.391 [2024-11-19 21:27:11.957904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.391 [2024-11-19 21:27:11.957953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.391 qpair failed and we were unable to recover it. 
00:37:38.391 [2024-11-19 21:27:11.958164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.958203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.958362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.958398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.958533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.958568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.958693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.958728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.958844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.958878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.958998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.959048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.959190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.959239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.959385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.959421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.959563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.959598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.959727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.959763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 
00:37:38.392 [2024-11-19 21:27:11.959879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.959914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.960022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.960056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.960193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.960232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.960342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.960378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.960537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.960572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.960725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.960760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.960872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.960907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.961027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.961063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.961184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.961220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.961374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.961424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 
00:37:38.392 [2024-11-19 21:27:11.961578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.961618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.961725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.961761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.961864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.961900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.962015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.962064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.962193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.962231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.962396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.962436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.962545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.962589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.962707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.962741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.962848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.962894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.963051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.963111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 
00:37:38.392 [2024-11-19 21:27:11.963260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.963298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.963407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.963443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.963555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.963592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.963703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.963738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.963889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.963938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.964048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.964097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.964208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.964245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.964379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.964414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.964524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.964559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.964682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.964717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 
00:37:38.392 [2024-11-19 21:27:11.964880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.964917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.965059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.965105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.965245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.965281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.965413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.965448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.965560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.965595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.965730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.965764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.965873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.392 [2024-11-19 21:27:11.965908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.392 qpair failed and we were unable to recover it. 00:37:38.392 [2024-11-19 21:27:11.966010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.966045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.966213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.966248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.966352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.966392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 
00:37:38.393 [2024-11-19 21:27:11.966504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.966540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.966682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.966717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.966864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.966899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.967025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.967082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.967208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.967248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.967392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.967429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.967540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.967576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.967697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.967746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.967894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.967931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.968090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.968140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 
00:37:38.393 [2024-11-19 21:27:11.968260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.968297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.968435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.968470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.968581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.968617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.968751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.968786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.968891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.968926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.969065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.969114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.969247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.969296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.969446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.969483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.969599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.969635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.969776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.969810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 
00:37:38.393 [2024-11-19 21:27:11.969955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.970005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.970177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.970215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.970325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.970361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.970522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.970557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.970670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.970706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.970852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.970889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.971017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.971067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.971216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.971266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.971415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.971452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.971577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.971614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 
00:37:38.393 [2024-11-19 21:27:11.971737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.971772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.971906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.971942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.972098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.972149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.972271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.972310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.972444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.972479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.972616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.972651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.972800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.972835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.972951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.972986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.973106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.973142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.973291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.973330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 
00:37:38.393 [2024-11-19 21:27:11.973452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.973489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.393 [2024-11-19 21:27:11.973629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.393 [2024-11-19 21:27:11.973665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.393 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.973809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.973845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.973962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.973998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.974125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.974161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.974279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.974316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.974450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.974485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.974590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.974625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.974746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.974782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.974889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.974925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 
00:37:38.394 [2024-11-19 21:27:11.975028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.975063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.975216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.975253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.975375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.975424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.975548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.975585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.975695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.975732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.975879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.975921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.976054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.976110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.976222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.976258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.976369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.976406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.976513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.976549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 
00:37:38.394 [2024-11-19 21:27:11.976652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.976688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.976846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.976880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.976991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.977026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.977176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.977215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.977331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.977367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.977505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.977540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.977647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.977682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.977813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.977848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.977988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.978024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.978169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.978204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 
00:37:38.394 [2024-11-19 21:27:11.978317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.978353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.978501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.978550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.978681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.978718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.978875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.978910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.979029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.979065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.979223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.979259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.979382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.979417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.979564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.979599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.979721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.979756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.979916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.979953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 
00:37:38.394 [2024-11-19 21:27:11.980082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.980131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.980249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.980285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.980396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.980432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.980541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.980577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.980711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.980747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.980880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.980915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.394 qpair failed and we were unable to recover it. 00:37:38.394 [2024-11-19 21:27:11.981031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.394 [2024-11-19 21:27:11.981067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.981211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.981258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.981373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.981410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.981550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.981586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 
00:37:38.395 [2024-11-19 21:27:11.981744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.981793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.981940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.981976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.982109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.982159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.982285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.982322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.982461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.982509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.982623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.982664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.982802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.982839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.982959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.982998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.983116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.983153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.983282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.983322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 
00:37:38.395 [2024-11-19 21:27:11.983464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.983501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.983617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.983652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.983762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.983798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.983905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.983940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.984088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.984132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.984236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.984271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.984441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.984477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.984591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.984626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.984739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.984775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.984889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.984925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 
00:37:38.395 [2024-11-19 21:27:11.985044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.985089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.985211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.985248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.985396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.985432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.985540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.985577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.985687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.985722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.985860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.985896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.986008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.986044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.986192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.986228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.986339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.986374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.986501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.986537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 
00:37:38.395 [2024-11-19 21:27:11.986651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.986686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.986825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.986862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.986994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.987043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.987168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.987205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.987320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.987355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.987490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.987526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.987661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.987697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.987812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.987848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.987965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.988002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.988101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.988138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 
00:37:38.395 [2024-11-19 21:27:11.988244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.988279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.988418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.988453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.988562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.988597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.988719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.988755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.988902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.988938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.989092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.989148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.989263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.395 [2024-11-19 21:27:11.989300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.395 qpair failed and we were unable to recover it. 00:37:38.395 [2024-11-19 21:27:11.989409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.989445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.989583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.989619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.989737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.989773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 
00:37:38.396 [2024-11-19 21:27:11.989909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.989960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.990101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.990138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.990250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.990286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.990425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.990460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.990568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.990604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.990745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.990780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.990877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.990912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.991049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.991095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.991238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.991273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.991441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.991476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 
00:37:38.396 [2024-11-19 21:27:11.991593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.991628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.991743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.991778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.991918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.991956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.992076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.992112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.992268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.992319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.992476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.992512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.992626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.992661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.992775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.992810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.992911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.992946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.993049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.993102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 
00:37:38.396 [2024-11-19 21:27:11.993233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.993269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.993377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.993412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.993553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.993588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.993697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.993733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.993872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.993907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.994082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.994132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.994259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.994310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.994473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.994523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.994671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.994709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.994835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.994872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 
00:37:38.396 [2024-11-19 21:27:11.995020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.995056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.995174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.995209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.995324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.995359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.995461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.995497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.995615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.995652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.995792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.995832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.996001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.996041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.996183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.996220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.996347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.996382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.996530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.996566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 
00:37:38.396 [2024-11-19 21:27:11.996703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.996738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.996857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.996892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.997036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.997080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.997205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.997240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.997360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.997395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.997550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.396 [2024-11-19 21:27:11.997586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.396 qpair failed and we were unable to recover it. 00:37:38.396 [2024-11-19 21:27:11.997701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:11.997736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:11.997859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:11.997894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:11.998030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:11.998066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:11.998204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:11.998239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 
00:37:38.397 [2024-11-19 21:27:11.998407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:11.998457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:11.998647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:11.998685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:11.998811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:11.998848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:11.998980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:11.999016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:11.999154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:11.999191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:11.999353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:11.999403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:11.999565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:11.999602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:11.999778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:11.999814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:11.999941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:11.999977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.000096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.000137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 
00:37:38.397 [2024-11-19 21:27:12.000273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.000322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.000492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.000530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.000653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.000690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.000803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.000840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.000971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.001021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.001155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.001194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.001336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.001372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.001539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.001576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.001719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.001754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.001900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.001936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 
00:37:38.397 [2024-11-19 21:27:12.002063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.002105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.002228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.002265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.002367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.002402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.002506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.002543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.002658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.002693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.002795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.002835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.002965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.003016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.003155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.003205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.003382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.003420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.003553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.003588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 
00:37:38.397 [2024-11-19 21:27:12.003722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.003758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.003871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.003907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.004047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.004095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.004247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.004297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.004415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.004452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.004592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.004628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.004780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.004816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.004936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.004986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.005115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.005152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.005280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.005342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 
00:37:38.397 [2024-11-19 21:27:12.005461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.005499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.005619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.005656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.005766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.005802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.397 [2024-11-19 21:27:12.005928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.397 [2024-11-19 21:27:12.005978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.397 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.006137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.006188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.006309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.006347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.006516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.006553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.006721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.006758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.006868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.006905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.007044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.007088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 
00:37:38.398 [2024-11-19 21:27:12.007216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.007266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.007399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.007449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.007592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.007629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.007749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.007786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.007902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.007938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.008082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.008119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.008232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.008269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.008378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.008417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.008579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.008616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.008722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.008759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 
00:37:38.398 [2024-11-19 21:27:12.008877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.008912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.009085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.009125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.009238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.009275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.009421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.009458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.009565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.009602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.009741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.009783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.009893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.009929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.010085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.010136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.010270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.010309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.010429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.010466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 
00:37:38.398 [2024-11-19 21:27:12.010581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.010617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.010716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.010752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.010888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.010923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.011030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.011066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.011215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.011255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.011366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.011402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.011536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.011572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.011674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.011710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.011817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.011852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.011970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.012007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 
00:37:38.398 [2024-11-19 21:27:12.012135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.012172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.012291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.012328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.012462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.012499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.012601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.012638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.012744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.012779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.012941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.012976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.013092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.013129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.013238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.013275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.013423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.013474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.013623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.013660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 
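Note on the repeated errno = 111 in the entries above: on Linux this is ECONNREFUSED, i.e. the TCP SYN to 10.0.0.2 port 4420 (the standard NVMe/TCP port) is being actively refused because nothing is accepting connections there yet. The short standalone program below is an illustration only, not SPDK code; the address and port are simply the values printed in the log. It reproduces the same errno the posix_sock_create messages report when no NVMe-oF target is listening.

/* econnrefused_demo.c -- illustration only, not SPDK code.
 * Attempts a plain blocking TCP connect() to 10.0.0.2:4420 (values taken
 * from the log above) and prints the resulting errno.  With no listener
 * on that address/port, connect() fails with errno 111 (ECONNREFUSED on
 * Linux), matching the posix_sock_create errors in this transcript.
 *
 * Build: cc -Wall -o econnrefused_demo econnrefused_demo.c
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    const char *addr = "10.0.0.2";   /* taken from the log */
    uint16_t    port = 4420;         /* default NVMe/TCP port */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return EXIT_FAILURE;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port   = htons(port);
    if (inet_pton(AF_INET, addr, &sa.sin_addr) != 1) {
        fprintf(stderr, "bad address %s\n", addr);
        close(fd);
        return EXIT_FAILURE;
    }

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With no listener on addr:port this prints errno = 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected to %s:%u\n", addr, (unsigned)port);
    }

    close(fd);
    return 0;
}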
00:37:38.398 [2024-11-19 21:27:12.013786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.013821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.013961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.013996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.014113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.014152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.014264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.014301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.014420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.014457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.014577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.014613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.014744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.398 [2024-11-19 21:27:12.014780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.398 qpair failed and we were unable to recover it. 00:37:38.398 [2024-11-19 21:27:12.014917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.014953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.015122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.015159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.015288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.015338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 
00:37:38.399 [2024-11-19 21:27:12.015482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.015520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.015621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.015658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.015793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.015829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.015990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.016040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.016197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.016235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.016346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.016382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.016527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.016563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.016677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.016714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.016856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.016892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.017031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.017078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 
00:37:38.399 [2024-11-19 21:27:12.017202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.017253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.017390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.017426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.017567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.017603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.017733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.017769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.017909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.017945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.018098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.018148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.018269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.018308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.018445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.018482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.018587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.018623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.018747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.018783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 
00:37:38.399 [2024-11-19 21:27:12.018910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.018946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.019118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.019156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.019271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.019308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.019454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.019491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.019633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.019681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.019789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.019825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.019978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.020028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.020150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.020188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.020315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.020366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.020487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.020525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 
00:37:38.399 [2024-11-19 21:27:12.020688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.020725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.020832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.020868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.020987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.021030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.021156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.021196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.021311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.021348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.021485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.021521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.021687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.021723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.021860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.021896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.022008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.022045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.022174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.022211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 
00:37:38.399 [2024-11-19 21:27:12.022324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.022376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.022519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.022556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.022671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.022707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.022842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.022878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.023008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.023058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.023196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.023246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.023401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.023439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.023578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.023615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.023725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.023762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.399 [2024-11-19 21:27:12.023877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.023913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 
00:37:38.399 [2024-11-19 21:27:12.024056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.399 [2024-11-19 21:27:12.024105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.399 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.024225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.024265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.024419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.024468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.024617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.024655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.024761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.024797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.024923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.024959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.025096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.025133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.025252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.025289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.025432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.025468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.025583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.025620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 
00:37:38.400 [2024-11-19 21:27:12.025757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.025793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.025936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.025971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.026086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.026124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.026264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.026314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.026472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.026521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.026647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.026685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.026825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.026861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.027012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.027049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.027194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.027232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.027348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.027383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 
00:37:38.400 [2024-11-19 21:27:12.027497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.027534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.027678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.027715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.027876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.027919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.028093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.028143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.028274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.028324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.028495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.028531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.028667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.028703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.028819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.028856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.028969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.029005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.029141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.029178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 
00:37:38.400 [2024-11-19 21:27:12.029329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.029379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.029535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.029575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.029689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.029726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.029834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.029870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.030010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.030046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.030192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.030228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.030381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.030418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.030533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.030569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.030726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.030776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.030897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.030933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 
00:37:38.400 [2024-11-19 21:27:12.031038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.031080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.031215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.031250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.031383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.031419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.031527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.031563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.031674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.031710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.031836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.031872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.031986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.032021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.032167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.032205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.032317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.032353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 00:37:38.400 [2024-11-19 21:27:12.032493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.400 [2024-11-19 21:27:12.032529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.400 qpair failed and we were unable to recover it. 
00:37:38.400 [2024-11-19 21:27:12.032635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.032672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.032806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.032842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.032995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.033046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.033157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.033194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.033320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.033369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.033518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.033556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.033696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.033733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.033887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.033924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.034038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.034083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.034192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.034228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 
00:37:38.401 [2024-11-19 21:27:12.034350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.034387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.034543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.034578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.034710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.034751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.034862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.034898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.035014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.035050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.035208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.035258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.035370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.035407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.035521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.035557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.035685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.035721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.035830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.035866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 
00:37:38.401 [2024-11-19 21:27:12.035976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.036012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.036134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.036170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.036310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.036345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.036484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.036519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.036662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.036698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.036812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.036848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.036982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.037032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.037160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.037197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.037313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.037349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.037491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.037526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 
00:37:38.401 [2024-11-19 21:27:12.037660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.037696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.037812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.037847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.037979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.038029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.038208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.038248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.038369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.038406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.038543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.038578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.038714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.038750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.038860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.038896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.039023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.039083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.039263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.039314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 
00:37:38.401 [2024-11-19 21:27:12.039428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.039465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.039604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.039639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.039778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.039814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.039940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.039991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.040143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.040182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.040311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.040362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.040508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.040546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.040683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.040721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.040831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.040868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.040987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.041038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 
00:37:38.401 [2024-11-19 21:27:12.041182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.041231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.041373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.041409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.041575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.041616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.041771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.041807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.041944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.041980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.401 [2024-11-19 21:27:12.042107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.401 [2024-11-19 21:27:12.042158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.401 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.042290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.042340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.042484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.042522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.042634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.042670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.042812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.042848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 
00:37:38.402 [2024-11-19 21:27:12.042966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.043017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.043154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.043193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.043309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.043350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.043499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.043535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.043651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.043688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.043798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.043834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.043972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.044022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.044153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.044192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.044311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.044348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.044482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.044519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 
00:37:38.402 [2024-11-19 21:27:12.044658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.044694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.044809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.044845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.044954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.044990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.045108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.045145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.045251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.045288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.045448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.045484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.045610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.045661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.045789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.045828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.045937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.045975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.046124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.046161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 
00:37:38.402 [2024-11-19 21:27:12.046299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.046335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.046449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.046485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.046594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.046630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.046742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.046777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.046905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.046954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.047089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.047128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.047241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.047278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.047386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.047422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.047543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.047581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.047694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.047730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 
00:37:38.402 [2024-11-19 21:27:12.047896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.047932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.048080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.048116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.048235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.048291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.048439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.048477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.048594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.048630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.048748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.048783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.048933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.048969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.049089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.049138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.049260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.049299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.049411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.049447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 
00:37:38.402 [2024-11-19 21:27:12.049582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.049618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.049763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.049800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.049938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.050008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.050153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.050191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.050299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.050335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.050468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.050503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.050625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.050661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.050776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.050814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.050952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.050987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.051135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.051185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 
00:37:38.402 [2024-11-19 21:27:12.051353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.051392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.051510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.402 [2024-11-19 21:27:12.051546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.402 qpair failed and we were unable to recover it. 00:37:38.402 [2024-11-19 21:27:12.051654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.051689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.051825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.051862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.052009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.052059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.052184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.052220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.052359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.052395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.052499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.052535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.052642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.052678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.052801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.052838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 
00:37:38.403 [2024-11-19 21:27:12.052950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.052987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.053100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.053140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.053255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.053292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.053412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.053448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.053584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.053621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.053731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.053768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.053892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.053930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.054077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.054115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.054253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.054289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.054399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.054435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 
00:37:38.403 [2024-11-19 21:27:12.054574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.054609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.054751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.054787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.054924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.054967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.055082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.055120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.055224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.055259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.055398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.055434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.055571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.055607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.055717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.055753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.055863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.055899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.056060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.056105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 
00:37:38.403 [2024-11-19 21:27:12.056211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.056247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.056370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.056420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.056561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.056599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.056720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.056757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.056856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.056892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.057005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.057054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.057239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.057277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.057415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.057451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.057614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.057650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.057754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.057790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 
00:37:38.403 [2024-11-19 21:27:12.057936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.057973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.058126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.058177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.058291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.058330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.058592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.058628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.058762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.058798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.059016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.059052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.059171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.059206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.059322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.059372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.059532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.059581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.059699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.059736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 
00:37:38.403 [2024-11-19 21:27:12.059875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.059911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.060056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.060099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.060240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.060275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.060413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.060449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.060643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.060679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.060818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.060854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.060991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.403 [2024-11-19 21:27:12.061029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.403 qpair failed and we were unable to recover it. 00:37:38.403 [2024-11-19 21:27:12.061145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.061181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.061334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.061384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.061532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.061568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 
00:37:38.404 [2024-11-19 21:27:12.061728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.061764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.061874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.061911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.062025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.062066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.062218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.062254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.062438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.062488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.062609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.062649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.062753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.062790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.062927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.062962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.063127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.063164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.063275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.063310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 
00:37:38.404 [2024-11-19 21:27:12.063474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.063509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.063653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.063691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.063793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.063829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.063999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.064036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.064185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.064223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.064331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.064367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.064631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.064666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.064837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.064873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.065007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.065043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.065165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.065202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 
00:37:38.404 [2024-11-19 21:27:12.065343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.065378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.065515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.065551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.065685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.065720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.065900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.065937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.066080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.066131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.066279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.066316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.066458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.066494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.066657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.066693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.404 [2024-11-19 21:27:12.066857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.404 [2024-11-19 21:27:12.066892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.404 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.067054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.067098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 
00:37:38.405 [2024-11-19 21:27:12.067277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.067328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.067457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.067497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.067661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.067698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.067860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.067896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.067999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.068035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.068179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.068216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.068317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.068354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.068455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.068491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.068593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.068628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.068781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.068831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 
00:37:38.405 [2024-11-19 21:27:12.068955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.068993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.069131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.069175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.069287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.069329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.069441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.069477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.069647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.069684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.069797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.069834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.069975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.070010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.070217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.070253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.070362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.070397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.070527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.070562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 
00:37:38.405 [2024-11-19 21:27:12.070675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.070710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.070821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.070858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.070990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.071025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.071188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.071237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.071352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.071388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.071519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.071554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.071695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.071731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.071862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.071897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.072000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.072035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 00:37:38.405 [2024-11-19 21:27:12.072213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.405 [2024-11-19 21:27:12.072250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.405 qpair failed and we were unable to recover it. 
00:37:38.405 [2024-11-19 21:27:12.072390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.072426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.072540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.072576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.072708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.072744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.072885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.072928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.073073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.073121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.073226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.073262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.073398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.073434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.073589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.073643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.073824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.073878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.074018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.074055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 
00:37:38.406 [2024-11-19 21:27:12.074216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.074265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.074416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.074453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.074591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.074627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.074757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.074792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.074927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.074962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.075077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.075113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.075240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.075289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.075433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.075470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.075634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.075669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.075833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.075869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 
00:37:38.406 [2024-11-19 21:27:12.076051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.076119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.076259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.076295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.076454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.076513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.076664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.076699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.076809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.076844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.076982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.077023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.077231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.077280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.077454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.077491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.077595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.077630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.077794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.077839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 
00:37:38.406 [2024-11-19 21:27:12.077943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.077978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.078092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.078138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.078278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.078314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.078452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.078505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.078655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.078713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.078869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.406 [2024-11-19 21:27:12.078904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.406 qpair failed and we were unable to recover it. 00:37:38.406 [2024-11-19 21:27:12.079046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.079089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.079244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.079280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.079428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.079473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.079600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.079635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 
00:37:38.407 [2024-11-19 21:27:12.079769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.079805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.079912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.079948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.080097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.080165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.080306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.080346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.080454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.080489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.080628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.080664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.080800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.080835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.081008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.081045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.081188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.081237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.081396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.081445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 
00:37:38.407 [2024-11-19 21:27:12.081592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.081650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.081909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.081969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.082159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.082196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.082305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.082348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.082525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.082582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.082700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.082739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.082886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.082925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.083132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.083181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.083313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.083363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.083512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.083549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 
00:37:38.407 [2024-11-19 21:27:12.083690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.083725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.083875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.083911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.084075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.084126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.084286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.084327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.084487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.084522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.084656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.407 [2024-11-19 21:27:12.084691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.407 qpair failed and we were unable to recover it. 00:37:38.407 [2024-11-19 21:27:12.084839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.084875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.085009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.085061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.085218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.085267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.085436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.085478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 
00:37:38.408 [2024-11-19 21:27:12.085630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.085669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.085877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.085916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.086031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.086079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.086271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.086305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.086481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.086581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.086759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.086798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.086969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.087004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.087147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.087183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.087290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.087336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.087499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.087549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 
00:37:38.408 [2024-11-19 21:27:12.087805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.087865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.088049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.088123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.088234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.088268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.088472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.088511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.088767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.088826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.088977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.089015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.089193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.089243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.089394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.089431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.089568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.089604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.089748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.089784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 
00:37:38.408 [2024-11-19 21:27:12.089922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.089957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.090085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.090125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.090224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.090259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.090416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.090465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.090659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.090715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.090926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.090979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.408 qpair failed and we were unable to recover it. 00:37:38.408 [2024-11-19 21:27:12.091127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.408 [2024-11-19 21:27:12.091163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.091289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.091348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.091520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.091556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.091700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.091736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 
00:37:38.409 [2024-11-19 21:27:12.091850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.091887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.092019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.092054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.092179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.092220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.092332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.092369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.092480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.092515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.092676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.092711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.092858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.092896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.093034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.093087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.093237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.093275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.093480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.093516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 
00:37:38.409 [2024-11-19 21:27:12.093693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.093752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.093893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.093932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.094046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.094119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.094233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.094268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.094438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.094473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.094638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.094674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.094784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.094819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.094947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.094982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.095099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.095137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.095239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.095274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 
00:37:38.409 [2024-11-19 21:27:12.095405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.095440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.095602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.095637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.095762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.095812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.095971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.096021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.096151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.096188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.096297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.096343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.096476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.096511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.409 qpair failed and we were unable to recover it. 00:37:38.409 [2024-11-19 21:27:12.096642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.409 [2024-11-19 21:27:12.096677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.096838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.096872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.097008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.097059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 
00:37:38.410 [2024-11-19 21:27:12.097229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.097267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.097453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.097518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.097682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.097718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.097884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.097935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.098089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.098129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.098264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.098299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.098432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.098470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.098667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.098766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.098951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.098989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.099169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.099205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 
00:37:38.410 [2024-11-19 21:27:12.099314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.099370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.099651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.099710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.099937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.099988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.100178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.100214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.100346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.100395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.100537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.100574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.100740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.100775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.100914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.100949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.101088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.101133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.101305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.101365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 
00:37:38.410 [2024-11-19 21:27:12.101526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.101592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.101837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.101905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.102124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.102160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.102342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.102404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.102559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.102625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.102895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.102953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.103119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.103156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.103291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.103327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.103490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.103530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.103703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.103743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 
00:37:38.410 [2024-11-19 21:27:12.103923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.103963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.104127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.104163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.410 qpair failed and we were unable to recover it. 00:37:38.410 [2024-11-19 21:27:12.104324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.410 [2024-11-19 21:27:12.104387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.104552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.104605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.104759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.104825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.104972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.105009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.105199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.105235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.105386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.105424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.105675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.105710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.105847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.105882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 
00:37:38.411 [2024-11-19 21:27:12.106013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.106047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.106196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.106246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.106431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.106481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.106608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.106645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.106793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.106828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.106968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.107003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.107134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.107169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.107297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.107351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.107538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.107578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.107714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.107750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 
00:37:38.411 [2024-11-19 21:27:12.107939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.107978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.108103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.108165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.108334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.108384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.108536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.108588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.108799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.108870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.109024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.109062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.109228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.109266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.109421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.109459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.109626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.109684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.109847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.109886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 
00:37:38.411 [2024-11-19 21:27:12.110038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.110078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.110225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.110260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.110432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.110466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.110568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.110625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.110764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.411 [2024-11-19 21:27:12.110818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.411 qpair failed and we were unable to recover it. 00:37:38.411 [2024-11-19 21:27:12.111009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.111057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.111334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.111399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.111559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.111602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.111752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.111790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.111993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.112032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 
00:37:38.412 [2024-11-19 21:27:12.112226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.112262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.112383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.112437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.112698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.112755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.112892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.112930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.113083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.113137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.113266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.113300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.113494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.113557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.113731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.113791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.113965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.114005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.114152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.114192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 
00:37:38.412 [2024-11-19 21:27:12.114299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.114340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.114503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.114539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.114698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.114737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.114904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.114938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.115145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.115180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.115289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.115331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.115437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.115471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.115627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.115665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.115806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.115844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.116017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.116055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 
00:37:38.412 [2024-11-19 21:27:12.116226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.116260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.116421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.116459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.116591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.116639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.116859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.116898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.117043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.117091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.117204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.117238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.117399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.412 [2024-11-19 21:27:12.117434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.412 qpair failed and we were unable to recover it. 00:37:38.412 [2024-11-19 21:27:12.117565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.117630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.117802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.117839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.117978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.118013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 
00:37:38.413 [2024-11-19 21:27:12.118146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.118181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.118316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.118367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.118578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.118616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.118792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.118830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.119001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.119038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.119191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.119241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.119411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.119459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.119623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.119678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.119812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.119852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.120021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.120056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 
00:37:38.413 [2024-11-19 21:27:12.120214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.120264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.120430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.120471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.120720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.120797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.120957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.120996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.121143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.121180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.121328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.121377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.121533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.121588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.121716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.121756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.121925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.121960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.122096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.122140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 
00:37:38.413 [2024-11-19 21:27:12.122248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.122283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.122451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.122487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.122594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.122629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.122731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.122766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.122931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.122965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.123106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.123141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.123273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.123345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.123534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.123576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.123787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.123858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.124008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.124047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 
00:37:38.413 [2024-11-19 21:27:12.124220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.124257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.124444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.124495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.124696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.124756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.124897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.413 [2024-11-19 21:27:12.124932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.413 qpair failed and we were unable to recover it. 00:37:38.413 [2024-11-19 21:27:12.125066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.125110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.125235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.125288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.125452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.125506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.125638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.125673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.125783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.125818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.125982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.126016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 
00:37:38.414 [2024-11-19 21:27:12.126156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.126192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.126300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.126334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.126467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.126502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.126638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.126673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.126812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.126848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.126987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.127025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.127190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.127260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.127389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.127430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.127693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.127753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.128064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.128144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 
00:37:38.414 [2024-11-19 21:27:12.128272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.128309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.128459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.128512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.128667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.128719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.128855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.128891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.129010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.129047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.129219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.129258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.129373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.129411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.129549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.129587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.414 [2024-11-19 21:27:12.129728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.414 [2024-11-19 21:27:12.129789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.414 qpair failed and we were unable to recover it. 00:37:38.707 [2024-11-19 21:27:12.129907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.707 [2024-11-19 21:27:12.129951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.707 qpair failed and we were unable to recover it. 
00:37:38.707 [2024-11-19 21:27:12.130112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.707 [2024-11-19 21:27:12.130149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.707 qpair failed and we were unable to recover it. 00:37:38.707 [2024-11-19 21:27:12.130274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.707 [2024-11-19 21:27:12.130313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.707 qpair failed and we were unable to recover it. 00:37:38.707 [2024-11-19 21:27:12.130491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.707 [2024-11-19 21:27:12.130544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.707 qpair failed and we were unable to recover it. 00:37:38.707 [2024-11-19 21:27:12.130667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.707 [2024-11-19 21:27:12.130723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.707 qpair failed and we were unable to recover it. 00:37:38.707 [2024-11-19 21:27:12.130860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.707 [2024-11-19 21:27:12.130896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.707 qpair failed and we were unable to recover it. 00:37:38.707 [2024-11-19 21:27:12.131074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.707 [2024-11-19 21:27:12.131145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.707 qpair failed and we were unable to recover it. 00:37:38.707 [2024-11-19 21:27:12.131264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.707 [2024-11-19 21:27:12.131301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.707 qpair failed and we were unable to recover it. 00:37:38.707 [2024-11-19 21:27:12.131437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.707 [2024-11-19 21:27:12.131476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.707 qpair failed and we were unable to recover it. 00:37:38.707 [2024-11-19 21:27:12.131623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.707 [2024-11-19 21:27:12.131658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.707 qpair failed and we were unable to recover it. 00:37:38.707 [2024-11-19 21:27:12.131764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.707 [2024-11-19 21:27:12.131799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.707 qpair failed and we were unable to recover it. 
00:37:38.707 [2024-11-19 21:27:12.131934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.707 [2024-11-19 21:27:12.131968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.707 qpair failed and we were unable to recover it. 00:37:38.707 [2024-11-19 21:27:12.132092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.707 [2024-11-19 21:27:12.132129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.707 qpair failed and we were unable to recover it. 00:37:38.707 [2024-11-19 21:27:12.132284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.707 [2024-11-19 21:27:12.132346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.707 qpair failed and we were unable to recover it. 00:37:38.707 [2024-11-19 21:27:12.132542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.707 [2024-11-19 21:27:12.132594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.707 qpair failed and we were unable to recover it. 00:37:38.707 [2024-11-19 21:27:12.132708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.707 [2024-11-19 21:27:12.132742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.707 qpair failed and we were unable to recover it. 00:37:38.707 [2024-11-19 21:27:12.132858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.132893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.133030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.133065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.133215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.133251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.133386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.133421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.133540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.133575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 
00:37:38.708 [2024-11-19 21:27:12.133758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.133796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.133926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.133980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.134133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.134188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.134416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.134474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.134719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.134758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.134876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.134915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.135082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.135118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.135225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.135260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.135450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.135508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.135612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.135647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 
00:37:38.708 [2024-11-19 21:27:12.135814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.135895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.136106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.136154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.136261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.136315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.136466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.136506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.136622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.136661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.136806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.136844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.137027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.137063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.137180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.137214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.137350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.137385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.137539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.137584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 
00:37:38.708 [2024-11-19 21:27:12.137703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.137743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.137894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.137933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.138094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.138140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.138284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.138319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.138541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.138591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.138814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.138852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.139034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.139081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.139218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.139252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.139387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.139422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.139620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.139686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 
00:37:38.708 [2024-11-19 21:27:12.139866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.139904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.140054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.140097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.140228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.708 [2024-11-19 21:27:12.140263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.708 qpair failed and we were unable to recover it. 00:37:38.708 [2024-11-19 21:27:12.140430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.140464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.140682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.140719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.140897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.140936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.141098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.141134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.141265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.141299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.141397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.141432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.141622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.141660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 
00:37:38.709 [2024-11-19 21:27:12.141781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.141833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.141979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.142017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.142186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.142222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.142358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.142393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.142554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.142592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.142713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.142752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.142904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.142943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.143087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.143141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.143278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.143312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.143439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.143473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 
00:37:38.709 [2024-11-19 21:27:12.143639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.143694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.143840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.143879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.144061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.144103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.144209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.144244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.144347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.144400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.144581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.144616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.144800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.144838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.144955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.144993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.145156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.145192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.145308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.145348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 
00:37:38.709 [2024-11-19 21:27:12.145486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.145521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.145675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.145714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.145832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.145870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.146012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.146050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.146193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.146228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.146362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.146397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.146503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.146537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.146692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.146727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.146828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.709 [2024-11-19 21:27:12.146879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.709 qpair failed and we were unable to recover it. 00:37:38.709 [2024-11-19 21:27:12.146983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.147021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 
00:37:38.710 [2024-11-19 21:27:12.147191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.147227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.147363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.147398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.147509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.147544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.147742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.147777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.147883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.147917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.148031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.148066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.148234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.148269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.148465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.148499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.148662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.148697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.148847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.148881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 
00:37:38.710 [2024-11-19 21:27:12.149041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.149081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.149195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.149230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.149382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.149418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.149574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.149609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.149767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.149806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.149977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.150015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.150181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.150216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.150334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.150368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.150506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.150541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.150677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.150712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 
00:37:38.710 [2024-11-19 21:27:12.150818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.150854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.150993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.151027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.151174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.151209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.151356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.151407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.151557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.151592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.151728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.151779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.151950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.151985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.152128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.152163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.152279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.152330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.152495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.152534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 
00:37:38.710 [2024-11-19 21:27:12.152697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.152731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.152908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.152947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.153135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.153169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.153294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.153329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.710 [2024-11-19 21:27:12.153427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.710 [2024-11-19 21:27:12.153461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.710 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.153589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.153626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.153803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.153837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.153938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.153973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.154127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.154165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.154321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.154355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 
00:37:38.711 [2024-11-19 21:27:12.154516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.154554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.154665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.154703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.154864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.154899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.155065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.155124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.155256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.155309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.155444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.155478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.155588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.155623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.155785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.155836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.155970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.156005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.156169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.156208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 
00:37:38.711 [2024-11-19 21:27:12.156362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.156412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.156516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.156551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.156677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.156712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.156899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.156938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.157096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.157131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.157272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.157307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.157427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.157461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.157588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.157622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.157753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.157787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.157924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.157959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 
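The records above all repeat the same pair: posix_sock_create reports connect() failing with errno 111, and nvme_tcp_qpair_connect_sock gives up on the qpair toward 10.0.0.2:4420. On Linux, errno 111 is ECONNREFUSED, i.e. nothing is currently listening on that address/port. A minimal shell sketch, not taken from the test suite, that reproduces the same errno against the address and port shown in the log:

```bash
#!/usr/bin/env bash
# Illustration only (not from the test suite): opening a TCP connection to a port
# with no listener fails; on Linux a refused connection is ECONNREFUSED (errno 111),
# the same errno posix_sock_create reports above. Address/port are taken from the log.
addr=10.0.0.2
port=4420
if ! ( : < "/dev/tcp/${addr}/${port}" ) 2>/dev/null; then
    echo "connect() to ${addr}:${port} failed (ECONNREFUSED / errno 111 expected when no listener)"
fi
```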
00:37:38.711 [2024-11-19 21:27:12.158061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.158110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.158254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.158289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.158447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.158485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.158631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.158665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.158815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.158868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.158994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.159033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3170661 Killed "${NVMF_APP[@]}" "$@" 00:37:38.711 [2024-11-19 21:27:12.159189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.159225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.159365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.159417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.159562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 21:27:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:37:38.711 [2024-11-19 21:27:12.159602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 
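The line "target_disconnect.sh: line 36: 3170661 Killed "${NVMF_APP[@]}" "$@"" interleaved above is bash's job-status notice: the previously started target application (pid 3170661) was terminated with SIGKILL, which is why the initiator's connect() attempts around it are refused. A small illustration of that notice format, assuming nothing beyond standard bash behavior:

```bash
#!/usr/bin/env bash
# Illustration only: bash reports a background job terminated by SIGKILL with the same
# "<script>: line <n>: <pid> Killed <command>" notice that target_disconnect.sh line 36
# printed for the old target process (pid 3170661 in the log).
sleep 300 &
victim=$!
kill -9 "${victim}"
wait "${victim}"            # the "Killed" notice is printed on stderr when the job is reaped
echo "wait status: $?"      # 137 = 128 + SIGKILL(9)
```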
00:37:38.711 [2024-11-19 21:27:12.159778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 [2024-11-19 21:27:12.159814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.711 qpair failed and we were unable to recover it. 00:37:38.711 [2024-11-19 21:27:12.159957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.711 21:27:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:38.712 [2024-11-19 21:27:12.159993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.160144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.160181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.712 21:27:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.160320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.160384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.712 21:27:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.160539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 21:27:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:38.712 [2024-11-19 21:27:12.160593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.160757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.160801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.160957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.160994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.161106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.161143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 
00:37:38.712 [2024-11-19 21:27:12.161269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.161309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.161465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.161500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.161606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.161641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.161798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.161847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.161971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.162009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.162180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.162216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.162356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.162392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.162553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.162588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.162727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.162761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.162918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.162954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 
00:37:38.712 [2024-11-19 21:27:12.163063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.163108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.163241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.163275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.163418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.163453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.163647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.163685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.163798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.163836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.163989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.164030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.164240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.164291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 [2024-11-19 21:27:12.164437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.164475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 21:27:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3171794 00:37:38.712 [2024-11-19 21:27:12.164596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.164655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 
00:37:38.712 21:27:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:38.712 21:27:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3171794 00:37:38.712 [2024-11-19 21:27:12.164827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.164883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 21:27:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3171794 ']' 00:37:38.712 [2024-11-19 21:27:12.165040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.712 [2024-11-19 21:27:12.165096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.712 qpair failed and we were unable to recover it. 00:37:38.712 21:27:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:38.713 [2024-11-19 21:27:12.165224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.165262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 21:27:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:38.713 [2024-11-19 21:27:12.165405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.165441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.713 21:27:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:38.713 21:27:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:38.713 [2024-11-19 21:27:12.165652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.165718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 21:27:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:38.713 [2024-11-19 21:27:12.165903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.165942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 
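Interleaved with the failure records, the xtrace shows test case tc2 restarting the target: disconnect_init 10.0.0.2 calls nvmfappstart -m 0xF0, which launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with "-i 0 -e 0xFFFF -m 0xF0", records its pid (3171794) in nvmfpid, and then waitforlisten blocks until the new process answers on /var/tmp/spdk.sock. The sketch below is a rough reconstruction of that visible sequence, not the real helpers; the RPC polling method in particular is an assumption.

```bash
#!/usr/bin/env bash
# Rough reconstruction of the restart sequence the xtrace shows; the real helpers
# (nvmfappstart/waitforlisten in the SPDK test scripts) do more than this sketch,
# and the polling method below is an assumption, not the actual implementation.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NETNS=cvl_0_0_ns_spdk
RPC_SOCK=/var/tmp/spdk.sock

# "nvmfappstart -m 0xF0": launch the target inside the test's network namespace.
# ($! here is the 'ip netns exec' wrapper pid; the real helper tracks the target pid.)
ip netns exec "${NETNS}" "${SPDK_ROOT}/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!

# "waitforlisten ${nvmfpid}": block until the new process serves RPCs on the UNIX socket
echo "Waiting for process to start up and listen on UNIX domain socket ${RPC_SOCK}..."
until "${SPDK_ROOT}/scripts/rpc.py" -s "${RPC_SOCK}" spdk_get_version >/dev/null 2>&1; do
    kill -0 "${nvmfpid}" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid ${nvmfpid}) is up and serving RPCs on ${RPC_SOCK}"
```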
00:37:38.713 [2024-11-19 21:27:12.166134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.166184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.166346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.166395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.166533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.166592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.166749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.166807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.166919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.166955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.167098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.167141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.167308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.167346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.167451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.167487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.167627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.167664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.167831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.167867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 
00:37:38.713 [2024-11-19 21:27:12.168013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.168063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.168250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.168304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.168498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.168556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.168749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.168810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.168977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.169013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.169143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.169198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.169380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.169435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.169577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.169615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.169822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.169882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.170036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.170079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 
00:37:38.713 [2024-11-19 21:27:12.170211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.170261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.170402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.170467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.170647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.170716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.170907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.170967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.171130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.171180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.171319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.171360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.171507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.171547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.171805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.171865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.171987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.172040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.172198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.172235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 
00:37:38.713 [2024-11-19 21:27:12.172383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.172442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.713 qpair failed and we were unable to recover it. 00:37:38.713 [2024-11-19 21:27:12.172557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.713 [2024-11-19 21:27:12.172595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.172833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.172895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.173063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.173144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.173310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.173377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.173596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.173656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.173800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.173857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.173983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.174023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.174187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.174223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.174391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.174447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 
00:37:38.714 [2024-11-19 21:27:12.174633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.174687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.174839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.174877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.175050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.175097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.175224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.175259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.175401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.175435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.175587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.175625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.175766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.175805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.175981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.176020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.176156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.176191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.176323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.176393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 
00:37:38.714 [2024-11-19 21:27:12.176560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.176597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.176781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.176835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.176969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.177021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.177170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.177210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.177342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.177396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.177544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.177583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.177707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.177760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.177896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.177935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.178117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.178173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.178289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.178325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 
00:37:38.714 [2024-11-19 21:27:12.178420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.178455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.178580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.178618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.178746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.178798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.178968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.179022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.179165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.179201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.179331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.179380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.179511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.179552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.179724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.179778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.714 [2024-11-19 21:27:12.179885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.714 [2024-11-19 21:27:12.179932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.714 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.180100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.180135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 
00:37:38.715 [2024-11-19 21:27:12.180248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.180284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.180426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.180461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.180598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.180633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.180809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.180844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.180988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.181024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.181149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.181186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.181296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.181351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.181518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.181579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.181744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.181799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.181915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.181967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 
00:37:38.715 [2024-11-19 21:27:12.182123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.182160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.182291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.182348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.182463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.182499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.182621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.182676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.182875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.182949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.183136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.183186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.183308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.183344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.183495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.183553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.183745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.183804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.183926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.183964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 
00:37:38.715 [2024-11-19 21:27:12.184122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.184159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.184295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.184351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.184464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.184499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.184627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.184687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.184840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.184889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.185009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.185048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.185182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.185219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.185338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.185372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.185521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.185555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.185662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.185696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 
00:37:38.715 [2024-11-19 21:27:12.185836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.185870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.185979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.186015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.186157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.186207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.186351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.186408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.186547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.186606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.186766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.186805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.715 [2024-11-19 21:27:12.186932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.715 [2024-11-19 21:27:12.186967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.715 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.187086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.187122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.187257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.187293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.187442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.187477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 
00:37:38.716 [2024-11-19 21:27:12.187590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.187625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.187760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.187796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.187926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.187959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.188064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.188103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.188244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.188283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.188452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.188506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.188664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.188705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.188837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.188874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.189016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.189062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.189203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.189259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 
00:37:38.716 [2024-11-19 21:27:12.189404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.189458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.189575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.189630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.189803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.189858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.189996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.190030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.190156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.190190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.190315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.190361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.190545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.190582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.190767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.190805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.190970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.191005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.191170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.191209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 
00:37:38.716 [2024-11-19 21:27:12.191359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.191403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.191597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.191631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.191767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.191815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.191976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.192016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.192157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.192192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.192301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.192335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.192471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.192505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.192619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.192653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.192791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.192825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 00:37:38.716 [2024-11-19 21:27:12.192927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.716 [2024-11-19 21:27:12.192961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.716 qpair failed and we were unable to recover it. 
00:37:38.717 [2024-11-19 21:27:12.193118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.193166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.193300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.193347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.193523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.193565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.193750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.193790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.193913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.193948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.194130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.194199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.194342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.194391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.194569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.194613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.194735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.194773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.195700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.195743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 
00:37:38.717 [2024-11-19 21:27:12.195924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.195962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.196095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.196149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.196284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.196322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.196455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.196494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.196629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.196667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.196794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.196832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.196981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.197019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.197172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.197206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.197319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.197363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.197490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.197530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 
00:37:38.717 [2024-11-19 21:27:12.197708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.197745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.197942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.198007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.198149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.198187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.198295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.198330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.198498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.198552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.198712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.198770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.198902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.198937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.199047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.199095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.199258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.199311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.199505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.199544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 
00:37:38.717 [2024-11-19 21:27:12.199692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.199747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.199860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.199898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.200044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.200106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.200238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.200281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.200433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.200472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.200609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.200647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.200764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.717 [2024-11-19 21:27:12.200801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.717 qpair failed and we were unable to recover it. 00:37:38.717 [2024-11-19 21:27:12.200940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.200977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.201862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.201913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.202079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.202122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 
00:37:38.718 [2024-11-19 21:27:12.202237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.202271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.203348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.203408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.203590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.203629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.204474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.204519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.204700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.204739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.205597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.205639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.205816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.205855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.206047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.206097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.206221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.206255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.206371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.206405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 
00:37:38.718 [2024-11-19 21:27:12.206542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.206576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.206760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.206807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.206978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.207012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.207165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.207200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.207317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.207351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.207472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.207522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.207664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.207700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.207893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.207929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.208033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.208077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.208206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.208240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 
00:37:38.718 [2024-11-19 21:27:12.208373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.208431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.208620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.208673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.208879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.208932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.209114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.209149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.209260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.209294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.209411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.209445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.209580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.209618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.209796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.209832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.209970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.210012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.210184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.210232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 
00:37:38.718 [2024-11-19 21:27:12.210350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.210387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.210486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.210540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.210676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.210713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.210833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.210875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.211027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.718 [2024-11-19 21:27:12.211064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.718 qpair failed and we were unable to recover it. 00:37:38.718 [2024-11-19 21:27:12.211188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.211225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.211331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.211385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.211542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.211581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.211743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.211779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.211926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.211964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 
00:37:38.719 [2024-11-19 21:27:12.212124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.212172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.212287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.212323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.212507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.212566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.212701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.212738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.212885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.212922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.213078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.213125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.213241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.213297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.213466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.213504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.213690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.213728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.213860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.213897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 
00:37:38.719 [2024-11-19 21:27:12.214005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.214043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.214185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.214220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.214364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.214420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.214617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.214674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.214802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.214855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.214995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.215029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.215174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.215228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.215383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.215422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.215566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.215614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.215786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.215824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 
00:37:38.719 [2024-11-19 21:27:12.215956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.215991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.216144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.216192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.216322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.216370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.216482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.216519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.216671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.216710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.216841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.216875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.217036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.217104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.217242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.217298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.217474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.217545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 00:37:38.719 [2024-11-19 21:27:12.217785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.719 [2024-11-19 21:27:12.217824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.719 qpair failed and we were unable to recover it. 
00:37:38.719 [2024-11-19 21:27:12.217972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.719 [2024-11-19 21:27:12.218009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:38.719 qpair failed and we were unable to recover it.
[... this three-line error sequence repeats without interruption from 00:37:38.719 (21:27:12.218) through 00:37:38.725 (21:27:12.262): posix.c:1054:posix_sock_create reports connect() failed with errno = 111, nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair 0x6150001f2f00, 0x615000210000, 0x61500021ff00, or 0x6150001ffe80, always with addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:37:38.725 [2024-11-19 21:27:12.262139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.725 [2024-11-19 21:27:12.262173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:38.725 qpair failed and we were unable to recover it.
00:37:38.725 [2024-11-19 21:27:12.262253] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization...
00:37:38.725 [2024-11-19 21:27:12.262336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.725 [2024-11-19 21:27:12.262391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:38.725 [2024-11-19 21:27:12.262402] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:38.725 qpair failed and we were unable to recover it.
00:37:38.725 [2024-11-19 21:27:12.262567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.725 [2024-11-19 21:27:12.262635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:38.725 qpair failed and we were unable to recover it.
00:37:38.725 [2024-11-19 21:27:12.262765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.725 [2024-11-19 21:27:12.262798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:38.725 qpair failed and we were unable to recover it.
00:37:38.725 [2024-11-19 21:27:12.262935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.725 [2024-11-19 21:27:12.262971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:38.725 qpair failed and we were unable to recover it.
00:37:38.725 [2024-11-19 21:27:12.263136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.725 [2024-11-19 21:27:12.263170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:38.725 qpair failed and we were unable to recover it.
00:37:38.725 [2024-11-19 21:27:12.263280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.725 [2024-11-19 21:27:12.263314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:38.725 qpair failed and we were unable to recover it.
00:37:38.725 [2024-11-19 21:27:12.263492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.725 [2024-11-19 21:27:12.263529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:38.725 qpair failed and we were unable to recover it.
00:37:38.725 [2024-11-19 21:27:12.263729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.725 [2024-11-19 21:27:12.263767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:38.725 qpair failed and we were unable to recover it.
00:37:38.725 [2024-11-19 21:27:12.263888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.725 [2024-11-19 21:27:12.263925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.725 qpair failed and we were unable to recover it. 00:37:38.725 [2024-11-19 21:27:12.264077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.725 [2024-11-19 21:27:12.264132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.725 qpair failed and we were unable to recover it. 00:37:38.725 [2024-11-19 21:27:12.264234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.725 [2024-11-19 21:27:12.264268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.725 qpair failed and we were unable to recover it. 00:37:38.725 [2024-11-19 21:27:12.264500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.725 [2024-11-19 21:27:12.264553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.725 qpair failed and we were unable to recover it. 00:37:38.725 [2024-11-19 21:27:12.264686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.725 [2024-11-19 21:27:12.264739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.725 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.264931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.264970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.265138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.265174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.265311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.265346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.265485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.265524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.265665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.265703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 
00:37:38.726 [2024-11-19 21:27:12.265876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.265928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.266095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.266144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.266287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.266324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.266441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.266478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.266720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.266779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.266943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.266979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.267128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.267164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.267328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.267382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.267508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.267565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.267741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.267800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 
00:37:38.726 [2024-11-19 21:27:12.267973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.268010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.268154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.268188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.268292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.268326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.268490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.268533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.268715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.268752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.268904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.268945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.269167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.269202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.269337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.269381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.269509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.269549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.269731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.269770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 
00:37:38.726 [2024-11-19 21:27:12.269916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.269954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.270143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.270191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.270314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.270379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.270572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.270674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.270824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.726 [2024-11-19 21:27:12.270894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.726 qpair failed and we were unable to recover it. 00:37:38.726 [2024-11-19 21:27:12.271024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.271092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.271198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.271232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.271377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.271413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.271558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.271604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.271791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.271858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 
00:37:38.727 [2024-11-19 21:27:12.272029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.272067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.272193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.272229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.272391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.272443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.272596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.272652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.272878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.272926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.273064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.273109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.273219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.273253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.273407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.273446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.273566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.273605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.273747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.273785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 
00:37:38.727 [2024-11-19 21:27:12.273915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.273951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.274085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.274133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.274291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.274366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.274490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.274531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.274705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.274744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.274883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.274918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.275028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.275066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.275201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.275240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.275385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.275423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.275662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.275701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 
00:37:38.727 [2024-11-19 21:27:12.275967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.276024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.276282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.276316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.276553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.276591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.276788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.276854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.276996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.277031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.277217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.277252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.277390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.277425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.277602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.277640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.277820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.277878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.278037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.278078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 
00:37:38.727 [2024-11-19 21:27:12.278194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.278229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.278381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.278419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.727 qpair failed and we were unable to recover it. 00:37:38.727 [2024-11-19 21:27:12.278540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.727 [2024-11-19 21:27:12.278603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.278800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.278864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.279010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.279060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.279266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.279314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.279479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.279527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.279702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.279764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.279924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.279986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.280148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.280182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 
00:37:38.728 [2024-11-19 21:27:12.280306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.280353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.280550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.280611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.280857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.280916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.281082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.281133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.281248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.281283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.281412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.281460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.281573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.281609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.281860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.281917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.282051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.282092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.282223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.282277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 
00:37:38.728 [2024-11-19 21:27:12.282439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.282506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.282743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.282797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.282976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.283015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.283265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.283301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.283429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.283466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.283640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.283679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.283901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.283939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.284103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.284138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.284287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.284335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.284535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.284575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 
00:37:38.728 [2024-11-19 21:27:12.284734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.284772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.284887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.284926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.285079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.285127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.285251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.285305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.285468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.285542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.285691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.285763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.285910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.285946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.286102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.286150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.286320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.286368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 00:37:38.728 [2024-11-19 21:27:12.286618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.728 [2024-11-19 21:27:12.286673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.728 qpair failed and we were unable to recover it. 
00:37:38.728 [2024-11-19 21:27:12.286847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.286916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.287030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.287065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.287217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.287251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.287411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.287451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.287576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.287614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.287810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.287865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.288022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.288056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.288178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.288212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.288359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.288397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.288567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.288605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 
00:37:38.729 [2024-11-19 21:27:12.288785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.288823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.288946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.288984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.289141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.289176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.289301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.289365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.289530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.289571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.289704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.289758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.289905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.289944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.290132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.290168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.290282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.290316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.290452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.290487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 
00:37:38.729 [2024-11-19 21:27:12.290655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.290707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.290941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.290977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.291113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.291148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.291255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.291289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.291396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.291430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.291572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.291625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.291785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.291819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.291986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.292020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.292174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.292209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.292372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.292411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 
00:37:38.729 [2024-11-19 21:27:12.292534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.292586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.292722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.292756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.292917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.292953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.293134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.293174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.293314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.293348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.293458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.293493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.293645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.293682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.293786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.729 [2024-11-19 21:27:12.293823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.729 qpair failed and we were unable to recover it. 00:37:38.729 [2024-11-19 21:27:12.293984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.294037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.294235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.294283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 
00:37:38.730 [2024-11-19 21:27:12.294505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.294562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.294709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.294784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.294901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.294952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.295065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.295107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.295243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.295276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.295463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.295501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.295624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.295681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.295826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.295863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.296009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.296047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.296203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.296250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 
00:37:38.730 [2024-11-19 21:27:12.296431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.296486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.296640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.296691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.296811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.296864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.297017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.297065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.297234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.297281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.297487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.297527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.297711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.297772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.298008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.298066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.298238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.298274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.298535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.298604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 
00:37:38.730 [2024-11-19 21:27:12.298864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.298926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.299066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.299108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.299241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.299276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.299446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.299499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.299684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.299738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.299846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.299881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.300042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.300113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.300245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.300315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.300566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.300620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 00:37:38.730 [2024-11-19 21:27:12.300848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.730 [2024-11-19 21:27:12.300904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.730 qpair failed and we were unable to recover it. 
00:37:38.731 [2024-11-19 21:27:12.301091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.301145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.301260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.301296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.301430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.301468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.301668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.301727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.301882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.301920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.302118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.302154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.302308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.302356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.302484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.302523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.302695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.302733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.302902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.302959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 
00:37:38.731 [2024-11-19 21:27:12.303103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.303138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.303252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.303286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.303422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.303460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.303606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.303643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.303785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.303822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.303954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.303989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.304145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.304193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.304356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.304402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.304577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.304617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.304753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.304792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 
00:37:38.731 [2024-11-19 21:27:12.304969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.305019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.305161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.305196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.305332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.305391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.305650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.305708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.305893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.305965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.306112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.306146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.306256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.306291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.306427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.306477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.306624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.306661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.306782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.306819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 
00:37:38.731 [2024-11-19 21:27:12.307017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.307090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.307238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.307274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.307450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.307484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.307614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.307672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.307792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.307857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.307992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.308028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.308210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.731 [2024-11-19 21:27:12.308247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.731 qpair failed and we were unable to recover it. 00:37:38.731 [2024-11-19 21:27:12.308368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.308416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.308560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.308595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.308695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.308729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 
00:37:38.732 [2024-11-19 21:27:12.308835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.308870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.309022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.309080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.309232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.309268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.309395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.309429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.309562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.309600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.309726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.309759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.309951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.309984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.310115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.310149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.310271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.310309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.310528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.310586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 
00:37:38.732 [2024-11-19 21:27:12.310716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.310774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.310929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.310963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.311082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.311118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.311284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.311319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.311529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.311597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.311788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.311823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.311955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.311990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.312158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.312212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.312312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.312346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.312496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.312529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 
00:37:38.732 [2024-11-19 21:27:12.312658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.312692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.312829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.312865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.312996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.313029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.313142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.313177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.313307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.313340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.313469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.313502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.313607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.313641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.313792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.313847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.313976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.314023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.314170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.314226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 
00:37:38.732 [2024-11-19 21:27:12.314374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.314421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.314571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.314611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.314758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.732 [2024-11-19 21:27:12.314796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.732 qpair failed and we were unable to recover it. 00:37:38.732 [2024-11-19 21:27:12.314957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.314996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.315161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.315197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.315347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.315399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.315499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.315533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.315730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.315787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.315928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.315962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.316101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.316136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 
00:37:38.733 [2024-11-19 21:27:12.316244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.316278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.316415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.316449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.316553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.316587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.316748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.316781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.316921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.316954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.317112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.317147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.317321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.317391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.317517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.317557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.317758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.317797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.317919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.317958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 
00:37:38.733 [2024-11-19 21:27:12.318116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.318151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.318282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.318316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.318451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.318506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.318677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.318715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.318896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.318934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.319109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.319144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.319249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.319282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.319436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.319474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.319690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.319729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.319841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.319878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 
00:37:38.733 [2024-11-19 21:27:12.320020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.320076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.320223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.320257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.320407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.320445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.320631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.320669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.320818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.320855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.321001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.321039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.321202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.321236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.321362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.321419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.321585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.321641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.321795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.321849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 
00:37:38.733 [2024-11-19 21:27:12.321978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.733 [2024-11-19 21:27:12.322018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.733 qpair failed and we were unable to recover it. 00:37:38.733 [2024-11-19 21:27:12.322185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.322237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.322370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.322404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.322541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.322576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.322739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.322773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.322906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.322945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.323081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.323115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.323213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.323247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.323379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.323413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.323565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.323605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 
00:37:38.734 [2024-11-19 21:27:12.323776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.323828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.323939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.323974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.324106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.324141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.324298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.324349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.324477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.324515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.324673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.324708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.324866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.324900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.325028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.325062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.325172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.325207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.325309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.325343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 
00:37:38.734 [2024-11-19 21:27:12.325458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.325491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.325599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.325635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.325771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.325806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.325945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.325979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.326134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.326173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.326346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.326383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.326490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.326527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.326687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.326725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.326865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.326918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.327043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.327083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 
00:37:38.734 [2024-11-19 21:27:12.327208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.327246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.327363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.327400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.327575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.327612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.327787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.327842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.327999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.328033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.328148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.328183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.328339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.328394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.734 [2024-11-19 21:27:12.328547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.734 [2024-11-19 21:27:12.328599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.734 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.328727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.328781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.328917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.328951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 
00:37:38.735 [2024-11-19 21:27:12.329129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.329172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.329290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.329329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.329473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.329510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.329623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.329661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.329808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.329846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.329998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.330033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.330196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.330247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.330398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.330455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.330606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.330657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.330834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.330869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 
00:37:38.735 [2024-11-19 21:27:12.331002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.331036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.331177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.331212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.331330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.331364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.331499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.331533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.331650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.331684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.331820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.331854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.331965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.331999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.332155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.332189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.332300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.332334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.332468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.332501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 
00:37:38.735 [2024-11-19 21:27:12.332607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.332645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.332799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.332850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.332986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.333020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.333169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.333204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.333361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.333413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.333559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.333612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.333765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.333804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.333967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.334021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.334177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.334215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.735 qpair failed and we were unable to recover it. 00:37:38.735 [2024-11-19 21:27:12.334391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.735 [2024-11-19 21:27:12.334425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 
00:37:38.736 [2024-11-19 21:27:12.334538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.334572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.334706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.334741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.334881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.334917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.335094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.335143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.335316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.335369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.335488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.335526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.335696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.335734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.335875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.335913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.336090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.336144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.336304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.336338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 
00:37:38.736 [2024-11-19 21:27:12.336464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.336507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.336656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.336694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.336837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.336875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.337000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.337035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.337202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.337250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.337378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.337426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.337556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.337612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.337768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.337820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.337955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.337990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.338141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.338194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 
00:37:38.736 [2024-11-19 21:27:12.338356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.338391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.338522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.338555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.338687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.338721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.338849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.338882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.339064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.339119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.339261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.339298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.339413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.339448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.339567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.339620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.339778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.339830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.339965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.340000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 
00:37:38.736 [2024-11-19 21:27:12.340207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.340261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.340420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.340461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.340681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.340749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.340884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.340919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.341050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.341097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.341221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.341290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.341483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.736 [2024-11-19 21:27:12.341536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.736 qpair failed and we were unable to recover it. 00:37:38.736 [2024-11-19 21:27:12.341722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.341775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.341911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.341956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.342089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.342125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 
00:37:38.737 [2024-11-19 21:27:12.342228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.342263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.342394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.342435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.342582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.342632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.342783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.342821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.342969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.343003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.343139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.343187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.343302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.343338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.343541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.343580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.343727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.343765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.343917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.343954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 
00:37:38.737 [2024-11-19 21:27:12.344110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.344153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.344316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.344368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.344513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.344551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.344811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.344870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.345047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.345111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.345267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.345302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.345493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.345544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.345693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.345731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.345897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.345931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.346099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.346133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 
00:37:38.737 [2024-11-19 21:27:12.346320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.346387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.346550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.346590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.346742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.346781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.346954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.346992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.347168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.347203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.347353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.347387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.347536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.347574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.347713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.347751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.347898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.347936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.348049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.348098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 
00:37:38.737 [2024-11-19 21:27:12.348218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.348251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.348412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.348479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.348668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.348707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.348894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.348948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.349059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.737 [2024-11-19 21:27:12.349102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.737 qpair failed and we were unable to recover it. 00:37:38.737 [2024-11-19 21:27:12.349258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.349312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.349465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.349519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.349664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.349699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.349837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.349871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.349982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.350016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 
00:37:38.738 [2024-11-19 21:27:12.350153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.350188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.350319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.350353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.350505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.350542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.350718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.350756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.350926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.350964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.351122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.351156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.351315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.351352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.351465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.351503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.351674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.351711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.351905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.351971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 
00:37:38.738 [2024-11-19 21:27:12.352083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.352125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.352237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.352272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.352420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.352472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.352636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.352718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.352880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.352914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.353030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.353065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.353225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.353273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.353412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.353449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.353604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.353642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.353788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.353827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 
00:37:38.738 [2024-11-19 21:27:12.353966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.354001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.354160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.354196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.354307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.354357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.354536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.354588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.354810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.354853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.355003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.355042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.355210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.355245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.355350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.355384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.355496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.355530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.355688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.355722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 
00:37:38.738 [2024-11-19 21:27:12.355886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.355921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.356027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.356061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.738 [2024-11-19 21:27:12.356176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.738 [2024-11-19 21:27:12.356210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.738 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.356340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.356388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.356557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.356604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.356783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.356822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.356953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.356987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.357129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.357168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.357278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.357314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.357453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.357492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 
00:37:38.739 [2024-11-19 21:27:12.357655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.357689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.357829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.357866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.357981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.358017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.358156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.358191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.358327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.358362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.358469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.358503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.358610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.358646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.358782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.358816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.358920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.358955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.359063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.359103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 
00:37:38.739 [2024-11-19 21:27:12.359240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.359279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.359388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.359422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.359556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.359592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.359694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.359729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.359890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.359924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.360032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.360066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.360228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.360276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.360417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.360465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.360633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.360667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.360775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.360808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 
00:37:38.739 [2024-11-19 21:27:12.360903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.360937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.361077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.361112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.361250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.361285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.361405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.361444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.361563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.361600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.361706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.361741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.361871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.361906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.362010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.362044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.362177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.362212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 00:37:38.739 [2024-11-19 21:27:12.362323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.739 [2024-11-19 21:27:12.362358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.739 qpair failed and we were unable to recover it. 
00:37:38.739 [2024-11-19 21:27:12.362510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.362544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.362647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.362682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.362848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.362882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.362988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.363022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.363140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.363176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.363288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.363323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.363436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.363470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.363614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.363650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.363758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.363795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.363927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.363961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 
00:37:38.740 [2024-11-19 21:27:12.364079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.364121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.364256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.364291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.364422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.364470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.364612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.364648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.364786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.364822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.364985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.365020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.365146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.365182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.365282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.365317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.365487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.365522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.365654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.365688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 
00:37:38.740 [2024-11-19 21:27:12.365793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.365833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.365972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.366007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.366140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.366179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.366284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.366319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.366492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.366526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.366635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.366669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.366801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.366836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.366974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.367009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.367170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.367219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.367364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.367400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 
00:37:38.740 [2024-11-19 21:27:12.367533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.367568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.367674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.367709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.367824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.740 [2024-11-19 21:27:12.367857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.740 qpair failed and we were unable to recover it. 00:37:38.740 [2024-11-19 21:27:12.368011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.368063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.368206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.368254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.368418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.368465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.368604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.368640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.368767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.368802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.368939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.368973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.369103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.369138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 
00:37:38.741 [2024-11-19 21:27:12.369293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.369341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.369482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.369518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.369658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.369694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.369824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.369859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.370017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.370066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.370237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.370285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.370435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.370470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.370602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.370636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.370771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.370805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.370931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.370965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 
00:37:38.741 [2024-11-19 21:27:12.371073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.371108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.371214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.371248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.371412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.371449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.371566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.371600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.371767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.371801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.371894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.371928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.372033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.372078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.372231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.372271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.372410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.372445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.372553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.372587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 
00:37:38.741 [2024-11-19 21:27:12.372694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.372733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.372839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.372872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.373013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.373047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.373164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.373198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.373312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.373349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.373513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.373548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.373683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.373717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.373853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.373887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.374033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.374090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.741 [2024-11-19 21:27:12.374210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.374246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 
00:37:38.741 [2024-11-19 21:27:12.374381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.741 [2024-11-19 21:27:12.374415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.741 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.374546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.374580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.374708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.374742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.374878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.374912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.375029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.375064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.375192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.375241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.375383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.375419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.375527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.375561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.375719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.375754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.375883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.375917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 
00:37:38.742 [2024-11-19 21:27:12.376033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.376090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.376239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.376275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.376425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.376463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.376600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.376635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.376769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.376803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.376912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.376947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.377105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.377141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.377259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.377298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.377432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.377466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.377573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.377607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 
00:37:38.742 [2024-11-19 21:27:12.377717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.377752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.377852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.377886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.378020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.378056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.378175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.378222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.378330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.378365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.378499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.378533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.378634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.378668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.378814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.378851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.378954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.378989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.379133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.379169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 
00:37:38.742 [2024-11-19 21:27:12.379284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.379324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.379432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.379468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.379627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.379661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.379786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.379819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.379919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.379953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.380130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.380178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.380289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.380324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.380427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.380461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.742 qpair failed and we were unable to recover it. 00:37:38.742 [2024-11-19 21:27:12.380596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.742 [2024-11-19 21:27:12.380630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.743 qpair failed and we were unable to recover it. 00:37:38.743 [2024-11-19 21:27:12.380762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.743 [2024-11-19 21:27:12.380796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.743 qpair failed and we were unable to recover it. 
00:37:38.743 [2024-11-19 21:27:12.380916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.743 [2024-11-19 21:27:12.380966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.743 qpair failed and we were unable to recover it. 00:37:38.743 [2024-11-19 21:27:12.381106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.743 [2024-11-19 21:27:12.381141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.743 qpair failed and we were unable to recover it. 00:37:38.743 [2024-11-19 21:27:12.381260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.743 [2024-11-19 21:27:12.381297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.743 qpair failed and we were unable to recover it. 00:37:38.743 [2024-11-19 21:27:12.381411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.743 [2024-11-19 21:27:12.381446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.743 qpair failed and we were unable to recover it. 00:37:38.743 [2024-11-19 21:27:12.381594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.743 [2024-11-19 21:27:12.381627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.743 qpair failed and we were unable to recover it. 00:37:38.743 [2024-11-19 21:27:12.381730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.743 [2024-11-19 21:27:12.381764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.743 qpair failed and we were unable to recover it. 00:37:38.743 [2024-11-19 21:27:12.381901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.743 [2024-11-19 21:27:12.381937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.743 qpair failed and we were unable to recover it. 00:37:38.743 [2024-11-19 21:27:12.382080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.743 [2024-11-19 21:27:12.382115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.743 qpair failed and we were unable to recover it. 00:37:38.743 [2024-11-19 21:27:12.382249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.743 [2024-11-19 21:27:12.382283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.743 qpair failed and we were unable to recover it. 00:37:38.743 [2024-11-19 21:27:12.382418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.743 [2024-11-19 21:27:12.382452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.743 qpair failed and we were unable to recover it. 
00:37:38.743 [2024-11-19 21:27:12.382591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.743 [2024-11-19 21:27:12.382625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:38.743 qpair failed and we were unable to recover it.
00:37:38.743 [2024-11-19 21:27:12.382954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.743 [2024-11-19 21:27:12.382989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:38.743 qpair failed and we were unable to recover it.
00:37:38.743 [2024-11-19 21:27:12.384091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.743 [2024-11-19 21:27:12.384147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:38.743 qpair failed and we were unable to recover it.
00:37:38.743 [2024-11-19 21:27:12.386243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:38.743 [2024-11-19 21:27:12.386291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:38.743 qpair failed and we were unable to recover it.
00:37:38.743 [... the same three-line failure (posix_sock_create connect() errno = 111, followed by nvme_tcp_qpair_connect_sock sock connection error, followed by "qpair failed and we were unable to recover it.") repeats continuously for the four tqpair handles above (0x6150001ffe80, 0x6150001f2f00, 0x61500021ff00, 0x615000210000), always against addr=10.0.0.2, port=4420, from 21:27:12.382591 through 21:27:12.418528 ...]
00:37:38.748 [2024-11-19 21:27:12.418654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.418689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.418796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.418842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.418950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.418983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.419087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.419122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.419238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.419272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.419403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.419437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 [2024-11-19 21:27:12.419427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.419550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.419584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.419749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.419782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.419886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.419922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 
00:37:38.749 [2024-11-19 21:27:12.420031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.420066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.420187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.420226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.420356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.420390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.420493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.420526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.420664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.420698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.420797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.420831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.420941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.420976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.421104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.421140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.421242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.421277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.421408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.421457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 
00:37:38.749 [2024-11-19 21:27:12.421569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.421605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.421717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.421752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.421851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.421885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.421998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.422032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.422183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.422218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.422325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.422360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.422488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.422522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.422656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.422691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.422803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.422836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.422966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.423000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 
00:37:38.749 [2024-11-19 21:27:12.423138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.423172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.423331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.423366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.423500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.423534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.749 [2024-11-19 21:27:12.423663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.749 [2024-11-19 21:27:12.423697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.749 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.423831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.423865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.423972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.424007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.424152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.424187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.424296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.424331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.424504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.424538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.424642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.424676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 
00:37:38.750 [2024-11-19 21:27:12.424812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.424846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.424992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.425025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.425184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.425219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.425348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.425382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.425549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.425582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.425683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.425716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.425870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.425904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.426036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.426076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.426195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.426229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.426338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.426372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 
00:37:38.750 [2024-11-19 21:27:12.426469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.426502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.426634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.426672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.426776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.426810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.426966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.427013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.427147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.427195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.427339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.427376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.427479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.427514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.427623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.427659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.427819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.427854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.427970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.428006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 
00:37:38.750 [2024-11-19 21:27:12.428148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.428183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.428289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.428325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.428435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.428468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.428576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.428610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.428717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.428751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.428916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.428951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.429062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.429107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.429242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.429277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.429410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.429444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.429541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.429575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 
00:37:38.750 [2024-11-19 21:27:12.429710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.429745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.750 qpair failed and we were unable to recover it. 00:37:38.750 [2024-11-19 21:27:12.429846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.750 [2024-11-19 21:27:12.429880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.429988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.430022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.430194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.430229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.430329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.430363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.430475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.430510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.430613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.430647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.430780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.430814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.430925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.430959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.431065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.431107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 
00:37:38.751 [2024-11-19 21:27:12.431244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.431278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.431400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.431437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.431595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.431630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.431765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.431799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.431906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.431940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.432040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.432079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.432219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.432253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.432359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.432393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.432561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.432596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.432728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.432763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 
00:37:38.751 [2024-11-19 21:27:12.432897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.432931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.433062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.433106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.433222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.433256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.433358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.433391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.433503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.433536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.433670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.433704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.433834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.433868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.434002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.434035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.434176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.434210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.434362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.434410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 
00:37:38.751 [2024-11-19 21:27:12.434520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.434556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.434665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.434700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.434832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.434867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.435007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.435041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.435209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.435244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.435388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.435424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.435540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.435574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.435682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.435716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.435831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.751 [2024-11-19 21:27:12.435864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.751 qpair failed and we were unable to recover it. 00:37:38.751 [2024-11-19 21:27:12.435998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.436043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 
00:37:38.752 [2024-11-19 21:27:12.436198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.436233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 00:37:38.752 [2024-11-19 21:27:12.436363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.436407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 00:37:38.752 [2024-11-19 21:27:12.436541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.436575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 00:37:38.752 [2024-11-19 21:27:12.436679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.436714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 00:37:38.752 [2024-11-19 21:27:12.436867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.436901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 00:37:38.752 [2024-11-19 21:27:12.437034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.437084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 00:37:38.752 [2024-11-19 21:27:12.437204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.437238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 00:37:38.752 [2024-11-19 21:27:12.437413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.437462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 00:37:38.752 [2024-11-19 21:27:12.437590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.437628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 00:37:38.752 [2024-11-19 21:27:12.437768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.437803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 
00:37:38.752 [2024-11-19 21:27:12.437952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.437988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 00:37:38.752 [2024-11-19 21:27:12.438110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.438146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 00:37:38.752 [2024-11-19 21:27:12.438278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.438327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 00:37:38.752 [2024-11-19 21:27:12.438467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.438515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 00:37:38.752 [2024-11-19 21:27:12.438678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.438715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 00:37:38.752 [2024-11-19 21:27:12.438854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.438889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 00:37:38.752 [2024-11-19 21:27:12.438990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.439024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 00:37:38.752 [2024-11-19 21:27:12.439169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.439203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 00:37:38.752 [2024-11-19 21:27:12.439309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.439343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 00:37:38.752 [2024-11-19 21:27:12.439498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.752 [2024-11-19 21:27:12.439532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:38.752 qpair failed and we were unable to recover it. 
00:37:38.752 [2024-11-19 21:27:12.439694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:37:38.752 [2024-11-19 21:27:12.439727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 
00:37:38.752 qpair failed and we were unable to recover it. 
00:37:38.752 [the same three-line sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it." -- repeats continuously from 21:27:12.439 through 21:27:12.474 for tqpairs 0x6150001ffe80, 0x61500021ff00, 0x615000210000 and 0x6150001f2f00, all targeting addr=10.0.0.2, port=4420] 
00:37:39.024 A controller has encountered a failure and is being reset. 
00:37:39.024 [further connect() failed (errno = 111) / qpair failure entries for the same tqpairs and target continue after the reset notice, through 21:27:12.474] 
00:37:39.024 [2024-11-19 21:27:12.474529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.024 [2024-11-19 21:27:12.474563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.024 qpair failed and we were unable to recover it. 00:37:39.024 [2024-11-19 21:27:12.474699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.024 [2024-11-19 21:27:12.474733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.024 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.474842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.474876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.475009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.475043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.475154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.475188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.475300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.475335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.475468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.475502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.475612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.475645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.475750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.475785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.475891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.475924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 
00:37:39.025 [2024-11-19 21:27:12.476058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.476107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.476260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.476294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.476404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.476438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.476601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.476636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.476772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.476805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.476916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.476950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.477051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.477093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.477210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.477244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.477401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.477450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.477565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.477603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 
00:37:39.025 [2024-11-19 21:27:12.477740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.477775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.477912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.477947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.478083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.478118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.478252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.478287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.478395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.478431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.478545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.478587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.478703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.478738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.478862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.478909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.479058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.479101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.479206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.479240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 
00:37:39.025 [2024-11-19 21:27:12.479347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.479381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.479513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.479552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.479685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.479719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.479826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.479861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.480001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.480037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.480166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.480215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.480379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.480415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.480529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.480563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.480691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.025 [2024-11-19 21:27:12.480726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.025 qpair failed and we were unable to recover it. 00:37:39.025 [2024-11-19 21:27:12.480856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.480890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 
00:37:39.026 [2024-11-19 21:27:12.481023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.481057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.481192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.481240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.481356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.481392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.481502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.481538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.481675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.481708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.481823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.481858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.482014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.482048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.482165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.482202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.482308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.482343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.482451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.482486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 
00:37:39.026 [2024-11-19 21:27:12.482594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.482629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.482730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.482764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.482870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.482904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.483035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.483075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.483178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.483212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.483317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.483350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.483481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.483514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.483618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.483652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.483769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.483803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.483920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.483955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 
00:37:39.026 [2024-11-19 21:27:12.484074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.484108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.484235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.484284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.484434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.484471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.484637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.484672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.484779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.484815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.484921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.484956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.485090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.485125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.485228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.485262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.485391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.485426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.485548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.485584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 
00:37:39.026 [2024-11-19 21:27:12.485717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.485751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.485857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.485896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.486028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.486061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.486178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.486212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.486358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.486407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.486567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.486614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.026 [2024-11-19 21:27:12.486730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.026 [2024-11-19 21:27:12.486766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.026 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.486886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.486922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.487057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.487098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.487210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.487244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 
00:37:39.027 [2024-11-19 21:27:12.487371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.487404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.487533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.487566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.487698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.487732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.487865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.487898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.488011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.488045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.488194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.488228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.488358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.488391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.488497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.488531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.488691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.488725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.488827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.488861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 
00:37:39.027 [2024-11-19 21:27:12.488990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.489024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.489150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.489198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.489305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.489341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.489484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.489520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.489622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.489656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.489764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.489799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.489931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.489965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.490095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.490131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.490306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.490354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.490522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.490557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 
00:37:39.027 [2024-11-19 21:27:12.490693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.490727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.490865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.490900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.491007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.491040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.491195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.491243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.491380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.491416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.491557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.491592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.491724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.491758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.491864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.491898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.492029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.492062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.492207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.492241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 
00:37:39.027 [2024-11-19 21:27:12.492365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.492400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.492537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.492576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.492708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.492743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.492873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.492907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.027 [2024-11-19 21:27:12.493017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.027 [2024-11-19 21:27:12.493051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.027 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.493204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.493252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.493397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.493433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.493568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.493602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.493705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.493740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.493841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.493876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 
00:37:39.028 [2024-11-19 21:27:12.494007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.494042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.494173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.494221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.494351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.494399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.494537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.494573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.494681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.494715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.494863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.494897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.494997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.495031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.495147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.495181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.495315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.495348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.495454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.495488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 
00:37:39.028 [2024-11-19 21:27:12.495619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.495653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.495786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.495820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.495924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.495958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.496108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.496143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.496255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.496294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.496438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.496474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.496638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.496673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.496803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.496837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.496975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.497010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.497172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.497206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 
00:37:39.028 [2024-11-19 21:27:12.497311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.497344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.497445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.497480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.497616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.497650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.497800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.497833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.497939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.497973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.498083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.028 [2024-11-19 21:27:12.498118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.028 qpair failed and we were unable to recover it. 00:37:39.028 [2024-11-19 21:27:12.498234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.498271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.498424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.498473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.498643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.498678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.498797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.498833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 
00:37:39.029 [2024-11-19 21:27:12.498941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.498976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.499112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.499153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.499268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.499302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.499407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.499441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.499557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.499592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.499733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.499768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.499894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.499928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.500038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.500077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.500184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.500218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.500337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.500386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 
00:37:39.029 [2024-11-19 21:27:12.500494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.500531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.500644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.500680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.500818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.500853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.500988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.501023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.501192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.501227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.501338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.501373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.501475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.501509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.501615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.501649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.501747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.501782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.501914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.501948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 
00:37:39.029 [2024-11-19 21:27:12.502046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.502089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.502224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.502259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.502361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.502396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.502531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.502565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.502698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.502733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.502865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.502899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.503027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.503060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.503230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.503264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.503388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.503436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.503579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.503614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 
00:37:39.029 [2024-11-19 21:27:12.503726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.503761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.503895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.503929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.504090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.504125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.504227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.029 [2024-11-19 21:27:12.504262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.029 qpair failed and we were unable to recover it. 00:37:39.029 [2024-11-19 21:27:12.504368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.504402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.504536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.504571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.504735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.504770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.504875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.504908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.505015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.505048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.505163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.505198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 
00:37:39.030 [2024-11-19 21:27:12.505306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.505340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.505456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.505499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.505640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.505674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.505822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.505856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.505990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.506024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.506142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.506176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.506308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.506342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.506479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.506513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.506615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.506650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.506754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.506788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 
00:37:39.030 [2024-11-19 21:27:12.506903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.506937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.507096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.507145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.507305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.507353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.507504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.507553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.507694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.507730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.507843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.507877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.508011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.508045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.508186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.508219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.508330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.508363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.508537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.508572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 
00:37:39.030 [2024-11-19 21:27:12.508735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.508768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.508869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.508902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.509014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.509048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.509169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.509202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.509340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.509374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.509514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.509549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.509655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.509689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.509820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.509854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.509971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.510032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.510184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.510223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 
00:37:39.030 [2024-11-19 21:27:12.510336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.510372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.030 [2024-11-19 21:27:12.510511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.030 [2024-11-19 21:27:12.510547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.030 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.510660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.510695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.510800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.510834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.510942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.510976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.511087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.511123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.511227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.511262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.511427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.511462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.511594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.511628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.511730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.511763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 
00:37:39.031 [2024-11-19 21:27:12.511926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.511962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.512098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.512137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.512244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.512278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.512384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.512419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.512551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.512585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.512687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.512720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.512857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.512891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.513001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.513036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.513177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.513214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.513337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.513386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 
00:37:39.031 [2024-11-19 21:27:12.513534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.513570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.513715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.513750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.513860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.513894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.514029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.514064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.514202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.514236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.514351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.514386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.514545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.514580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.514691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.514724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.514835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.514869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.514984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.515031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 
00:37:39.031 [2024-11-19 21:27:12.515160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.515197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.515327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.515362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.515463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.515497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.515716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.515750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.515914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.515949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.516044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.516083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.516189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.516223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.516376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.031 [2024-11-19 21:27:12.516424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.031 qpair failed and we were unable to recover it. 00:37:39.031 [2024-11-19 21:27:12.516576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.516618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.516727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.516761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 
00:37:39.032 [2024-11-19 21:27:12.516894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.516928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.517063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.517108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.517221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.517255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.517397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.517431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.517531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.517565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.517689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.517723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.517834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.517868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.518011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.518044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.518158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.518193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.518321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.518370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 
00:37:39.032 [2024-11-19 21:27:12.518532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.518569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.518687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.518722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.518865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.518900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.519042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.519084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.519218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.519253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.519385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.519419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.519547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.519581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.519698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.519733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.519910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.519957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.520099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.520136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 
00:37:39.032 [2024-11-19 21:27:12.520274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.520308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.520437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.520471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.520604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.520638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.520776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.520809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.520949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.520984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.521103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.521149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.521264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.521300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.521431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.521465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.521626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.521660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 00:37:39.032 [2024-11-19 21:27:12.521774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.521807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.032 qpair failed and we were unable to recover it. 
00:37:39.032 [2024-11-19 21:27:12.521965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.032 [2024-11-19 21:27:12.521999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.522134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.522169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.522295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.522328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.522460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.522494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.522632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.522667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.522776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.522811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.522916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.522950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.523051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.523092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.523221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.523260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.523394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.523428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 
00:37:39.033 [2024-11-19 21:27:12.523563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.523597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.523726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.523759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.523878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.523913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.524025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.524059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.524171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.524205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.524309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.524343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.524475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.524509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.524648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.524681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.524789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.524823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.524943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.524991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 
00:37:39.033 [2024-11-19 21:27:12.525113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.525151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.525288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.525324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.525489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.525524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.525672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.525708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.525843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.525879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.526012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.526046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.526187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.526222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.526326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.526362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.526472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.526506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.526676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.526710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 
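errno = 111 here is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 at this point, so every qpair connect attempt is rejected immediately and the NVMe/TCP host keeps retrying. A minimal bash sketch for checking from the initiator side whether the listener has come back (the address and port are taken from the log above and are specific to this run):

  #!/usr/bin/env bash
  # Probe the NVMe/TCP listener that the log shows being refused.
  # TARGET_IP/TARGET_PORT come from the log above (10.0.0.2:4420); adjust as needed.
  TARGET_IP=10.0.0.2
  TARGET_PORT=4420
  # errno 111 (ECONNREFUSED) simply means no process accepted the TCP handshake.
  if timeout 2 bash -c "</dev/tcp/${TARGET_IP}/${TARGET_PORT}" 2>/dev/null; then
      echo "listener is up on ${TARGET_IP}:${TARGET_PORT}"
  else
      echo "connection refused or timed out - nothing listening (matches errno = 111 above)"
  fi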
00:37:39.033 [2024-11-19 21:27:12.526814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.526849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.527005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.527039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.527153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.527190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.527344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.527392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:39.033 qpair failed and we were unable to recover it. 00:37:39.033 [2024-11-19 21:27:12.527628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.033 [2024-11-19 21:27:12.527684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:39.033 [2024-11-19 21:27:12.527719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:37:39.033 [2024-11-19 21:27:12.527762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:37:39.033 [2024-11-19 21:27:12.527792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:37:39.033 [2024-11-19 21:27:12.527819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:37:39.033 [2024-11-19 21:27:12.527849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:37:39.033 Unable to reset the controller. 00:37:39.033 [2024-11-19 21:27:12.556053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:39.033 [2024-11-19 21:27:12.556124] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:39.033 [2024-11-19 21:27:12.556148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:39.033 [2024-11-19 21:27:12.556178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:39.034 [2024-11-19 21:27:12.556206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
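Every connect() failure in the burst above reports errno = 111, which is ECONNREFUSED on Linux: the initiator keeps dialing 10.0.0.2:4420 while the disconnect test has the target side down, until the host gives up ("Unable to reset the controller."). The app_setup_trace NOTICE lines then point at the trace shared memory. A minimal bash sketch for acting on those two hints, assuming a standard Linux header layout and the shm id 0 this run uses:

    # errno 111 is Connection refused (header path assumed for a typical Linux install)
    grep ECONNREFUSED /usr/include/asm-generic/errno.h

    # probe the listener the qpairs are dialing, using bash's built-in /dev/tcp
    timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' && echo "4420 open" || echo "4420 refused/unreachable"

    # capture the tracepoint snapshot the NOTICE lines above suggest (shm id 0)
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    cp /dev/shm/nvmf_trace.0 .    # or keep the raw file for offline analysis/debug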
00:37:39.034 [2024-11-19 21:27:12.559022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:39.034 [2024-11-19 21:27:12.561549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:39.034 [2024-11-19 21:27:12.561585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:39.034 [2024-11-19 21:27:12.561588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:39.600 Malloc0 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:39.600 [2024-11-19 21:27:13.347784] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:39.600 [2024-11-19 21:27:13.377691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.600 21:27:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3171178 00:37:40.166 Controller properly reset. 00:37:45.431 Initializing NVMe Controllers 00:37:45.431 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:45.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:45.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:45.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:45.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:45.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:45.431 Initialization complete. Launching workers. 
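The xtrace above is the tc2 target bring-up, driven through the harness's rpc_cmd wrapper around SPDK's scripts/rpc.py. As a hedged consolidation (default /var/tmp/spdk.sock RPC socket and a repo-root-relative rpc.py path assumed), the same sequence issued by hand would look like:

    RPC=./scripts/rpc.py    # hypothetical path; the harness resolves this itself

    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
    $RPC nvmf_create_transport -t tcp -o             # '-o' kept exactly as the test passes it
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The nqn, serial number, address and port are taken verbatim from the trace above.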
00:37:45.431 Starting thread on core 1 00:37:45.431 Starting thread on core 2 00:37:45.431 Starting thread on core 3 00:37:45.431 Starting thread on core 0 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:45.431 00:37:45.431 real 0m11.555s 00:37:45.431 user 0m36.523s 00:37:45.431 sys 0m7.667s 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:45.431 ************************************ 00:37:45.431 END TEST nvmf_target_disconnect_tc2 00:37:45.431 ************************************ 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:45.431 rmmod nvme_tcp 00:37:45.431 rmmod nvme_fabrics 00:37:45.431 rmmod nvme_keyring 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3171794 ']' 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3171794 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3171794 ']' 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3171794 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3171794 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3171794' 00:37:45.431 killing process with pid 3171794 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 3171794 00:37:45.431 21:27:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3171794 00:37:46.366 21:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:46.366 21:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:46.366 21:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:46.366 21:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:37:46.366 21:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:37:46.366 21:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:46.366 21:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:37:46.366 21:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:46.366 21:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:46.366 21:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:46.366 21:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:46.366 21:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:48.270 21:27:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:48.270 00:37:48.270 real 0m17.536s 00:37:48.270 user 1m4.596s 00:37:48.270 sys 0m10.312s 00:37:48.270 21:27:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:48.270 21:27:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:48.270 ************************************ 00:37:48.270 END TEST nvmf_target_disconnect 00:37:48.270 ************************************ 00:37:48.270 21:27:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:48.270 00:37:48.270 real 7m38.003s 00:37:48.270 user 19m47.923s 00:37:48.270 sys 1m34.331s 00:37:48.270 21:27:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:48.270 21:27:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.270 ************************************ 00:37:48.270 END TEST nvmf_host 00:37:48.270 ************************************ 00:37:48.270 21:27:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:37:48.270 21:27:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:37:48.270 21:27:21 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:48.270 21:27:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:48.270 21:27:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:48.270 21:27:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:48.270 ************************************ 00:37:48.270 START TEST nvmf_target_core_interrupt_mode 00:37:48.270 ************************************ 00:37:48.270 21:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:48.270 * Looking for test storage... 00:37:48.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:37:48.271 21:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:48.271 21:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:37:48.271 21:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:48.271 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:48.271 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:48.271 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:48.271 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:48.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.531 --rc genhtml_branch_coverage=1 00:37:48.531 --rc genhtml_function_coverage=1 00:37:48.531 --rc genhtml_legend=1 00:37:48.531 --rc geninfo_all_blocks=1 00:37:48.531 --rc geninfo_unexecuted_blocks=1 00:37:48.531 00:37:48.531 ' 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:48.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.531 --rc genhtml_branch_coverage=1 00:37:48.531 --rc genhtml_function_coverage=1 00:37:48.531 --rc genhtml_legend=1 00:37:48.531 --rc geninfo_all_blocks=1 00:37:48.531 --rc geninfo_unexecuted_blocks=1 00:37:48.531 00:37:48.531 ' 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:48.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.531 --rc genhtml_branch_coverage=1 00:37:48.531 --rc genhtml_function_coverage=1 00:37:48.531 --rc genhtml_legend=1 00:37:48.531 --rc geninfo_all_blocks=1 00:37:48.531 --rc geninfo_unexecuted_blocks=1 00:37:48.531 00:37:48.531 ' 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:48.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.531 --rc genhtml_branch_coverage=1 00:37:48.531 --rc genhtml_function_coverage=1 00:37:48.531 --rc genhtml_legend=1 00:37:48.531 --rc geninfo_all_blocks=1 00:37:48.531 --rc geninfo_unexecuted_blocks=1 00:37:48.531 00:37:48.531 ' 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:48.531 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:48.532 ************************************ 00:37:48.532 START TEST nvmf_abort 00:37:48.532 ************************************ 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:48.532 * Looking for test storage... 00:37:48.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:48.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.532 --rc genhtml_branch_coverage=1 00:37:48.532 --rc genhtml_function_coverage=1 00:37:48.532 --rc genhtml_legend=1 00:37:48.532 --rc geninfo_all_blocks=1 00:37:48.532 --rc geninfo_unexecuted_blocks=1 00:37:48.532 00:37:48.532 ' 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:48.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.532 --rc genhtml_branch_coverage=1 00:37:48.532 --rc genhtml_function_coverage=1 00:37:48.532 --rc genhtml_legend=1 00:37:48.532 --rc geninfo_all_blocks=1 00:37:48.532 --rc geninfo_unexecuted_blocks=1 00:37:48.532 00:37:48.532 ' 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:48.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.532 --rc genhtml_branch_coverage=1 00:37:48.532 --rc genhtml_function_coverage=1 00:37:48.532 --rc genhtml_legend=1 00:37:48.532 --rc geninfo_all_blocks=1 00:37:48.532 --rc geninfo_unexecuted_blocks=1 00:37:48.532 00:37:48.532 ' 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:48.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.532 --rc genhtml_branch_coverage=1 00:37:48.532 --rc genhtml_function_coverage=1 00:37:48.532 --rc genhtml_legend=1 00:37:48.532 --rc geninfo_all_blocks=1 00:37:48.532 --rc geninfo_unexecuted_blocks=1 00:37:48.532 00:37:48.532 ' 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.532 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:48.533 21:27:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:37:48.533 21:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:51.065 21:27:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:51.065 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
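The pci scan above is nvmf/common.sh classifying this host's NICs: vendor 0x8086, device 0x159b lands in the e810 array and the ice driver check, and the second port (0000:0a:00.1) is reported in the very next records. A small sketch of reproducing that discovery by hand on this box (lspci assumed to be installed):

    # list the E810 ports the harness classifies (vendor 0x8086, device 0x159b)
    lspci -d 8086:159b

    # resolve the kernel net device behind each port, the same way the script's
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion does
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
    done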
00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:51.065 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:51.065 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:51.065 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:51.065 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:51.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:51.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:37:51.066 00:37:51.066 --- 10.0.0.2 ping statistics --- 00:37:51.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:51.066 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:51.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:51.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:37:51.066 00:37:51.066 --- 10.0.0.1 ping statistics --- 00:37:51.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:51.066 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3174677 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3174677 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3174677 ']' 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:51.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:51.066 21:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.066 [2024-11-19 21:27:24.559946] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:51.066 [2024-11-19 21:27:24.562610] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:37:51.066 [2024-11-19 21:27:24.562726] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:51.066 [2024-11-19 21:27:24.709235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:51.066 [2024-11-19 21:27:24.843565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:51.066 [2024-11-19 21:27:24.843643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:51.066 [2024-11-19 21:27:24.843672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:51.066 [2024-11-19 21:27:24.843693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:51.066 [2024-11-19 21:27:24.843715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:51.066 [2024-11-19 21:27:24.846329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:51.066 [2024-11-19 21:27:24.846380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:51.066 [2024-11-19 21:27:24.846403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:51.633 [2024-11-19 21:27:25.210655] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:51.633 [2024-11-19 21:27:25.211834] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:51.633 [2024-11-19 21:27:25.212657] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:51.633 [2024-11-19 21:27:25.212997] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:51.907 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:51.907 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:37:51.907 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:51.907 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:51.907 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.907 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:51.907 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:37:51.907 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.907 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.907 [2024-11-19 21:27:25.551470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:51.907 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.908 Malloc0 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.908 Delay0 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.908 [2024-11-19 21:27:25.683648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.908 21:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:37:52.166 [2024-11-19 21:27:25.879239] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:54.695 Initializing NVMe Controllers 00:37:54.695 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:54.695 controller IO queue size 128 less than required 00:37:54.695 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:37:54.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:37:54.695 Initialization complete. Launching workers. 
00:37:54.695 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 22695 00:37:54.695 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 22752, failed to submit 66 00:37:54.695 success 22695, unsuccessful 57, failed 0 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:54.695 rmmod nvme_tcp 00:37:54.695 rmmod nvme_fabrics 00:37:54.695 rmmod nvme_keyring 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3174677 ']' 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3174677 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3174677 ']' 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3174677 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3174677 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3174677' 00:37:54.695 killing process with pid 3174677 
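For reference, the abort exercise summarised above reduces to the following sequence, reconstructed from the rpc_cmd/abort invocations in the trace (paths shortened, relative to the spdk checkout; this is a condensed sketch of what target/abort.sh drives, not its literal source):

    # expose a deliberately slow namespace over TCP
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # drive it with a deep queue so most requests are still outstanding and can be aborted
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

Because the delay bdev holds every I/O for a large fixed latency, nearly the whole 128-deep queue is abortable at any moment, which is consistent with the 22752 aborts submitted and 22695 completing successfully in the summary above.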
00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3174677 00:37:54.695 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3174677 00:37:56.071 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:56.071 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:56.071 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:56.071 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:37:56.071 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:37:56.071 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:56.071 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:37:56.071 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:56.071 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:56.071 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:56.071 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:56.071 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:57.973 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:57.973 00:37:57.973 real 0m9.377s 00:37:57.973 user 0m11.838s 00:37:57.973 sys 0m3.135s 00:37:57.973 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:57.973 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:57.973 ************************************ 00:37:57.973 END TEST nvmf_abort 00:37:57.973 ************************************ 00:37:57.973 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:57.973 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:57.973 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:57.973 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:57.973 ************************************ 00:37:57.973 START TEST nvmf_ns_hotplug_stress 00:37:57.973 ************************************ 00:37:57.973 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:57.973 * Looking for test storage... 
00:37:57.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:57.973 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:57.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.974 --rc genhtml_branch_coverage=1 00:37:57.974 --rc genhtml_function_coverage=1 00:37:57.974 --rc genhtml_legend=1 00:37:57.974 --rc geninfo_all_blocks=1 00:37:57.974 --rc geninfo_unexecuted_blocks=1 00:37:57.974 00:37:57.974 ' 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:57.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.974 --rc genhtml_branch_coverage=1 00:37:57.974 --rc genhtml_function_coverage=1 00:37:57.974 --rc genhtml_legend=1 00:37:57.974 --rc geninfo_all_blocks=1 00:37:57.974 --rc geninfo_unexecuted_blocks=1 00:37:57.974 00:37:57.974 ' 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:57.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.974 --rc genhtml_branch_coverage=1 00:37:57.974 --rc genhtml_function_coverage=1 00:37:57.974 --rc genhtml_legend=1 00:37:57.974 --rc geninfo_all_blocks=1 00:37:57.974 --rc geninfo_unexecuted_blocks=1 00:37:57.974 00:37:57.974 ' 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:57.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:57.974 --rc genhtml_branch_coverage=1 00:37:57.974 --rc genhtml_function_coverage=1 
00:37:57.974 --rc genhtml_legend=1 00:37:57.974 --rc geninfo_all_blocks=1 00:37:57.974 --rc geninfo_unexecuted_blocks=1 00:37:57.974 00:37:57.974 ' 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:57.974 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:37:57.975 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:59.877 21:27:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:59.877 21:27:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:59.877 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:59.877 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:59.877 
21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:59.877 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:59.877 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:59.878 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:59.878 21:27:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:59.878 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:00.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:00.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:38:00.137 00:38:00.137 --- 10.0.0.2 ping statistics --- 00:38:00.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:00.137 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:00.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:00.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:38:00.137 00:38:00.137 --- 10.0.0.1 ping statistics --- 00:38:00.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:00.137 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3177159 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3177159 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3177159 ']' 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:00.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:00.137 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:00.137 [2024-11-19 21:27:33.834943] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:00.137 [2024-11-19 21:27:33.837555] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:38:00.137 [2024-11-19 21:27:33.837647] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:00.395 [2024-11-19 21:27:33.993751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:00.395 [2024-11-19 21:27:34.132716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:00.395 [2024-11-19 21:27:34.132787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:00.395 [2024-11-19 21:27:34.132815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:00.395 [2024-11-19 21:27:34.132836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:00.395 [2024-11-19 21:27:34.132860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:00.395 [2024-11-19 21:27:34.135461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:00.395 [2024-11-19 21:27:34.135549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:00.395 [2024-11-19 21:27:34.135571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:00.962 [2024-11-19 21:27:34.498174] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:00.962 [2024-11-19 21:27:34.499289] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:00.962 [2024-11-19 21:27:34.500104] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:00.962 [2024-11-19 21:27:34.500466] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
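The start-up notices above come from the same nvmfappstart pattern used throughout these tests: launch nvmf_tgt inside the test namespace with --interrupt-mode, record its pid, and wait for the RPC socket before issuing any configuration. A minimal sketch of that pattern (the waitforlisten helper in autotest_common.sh is reduced here to a simple polling loop against rpc_get_methods; paths are relative to the spdk checkout):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    # poll until the target's RPC socket answers; give up if the process dies first
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1
        sleep 0.1
    done

With -m 0xE the reactors come up on cores 1-3, and --interrupt-mode is what switches app_thread and the nvmf poll-group threads to interrupt-driven operation, as the thread.c notices record.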
00:38:01.220 21:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:01.220 21:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:38:01.220 21:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:01.220 21:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:01.220 21:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:01.220 21:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:01.220 21:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:38:01.220 21:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:01.479 [2024-11-19 21:27:35.120615] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:01.479 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:01.738 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:01.996 [2024-11-19 21:27:35.681147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:01.996 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:02.255 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:38:02.513 Malloc0 00:38:02.513 21:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:03.081 Delay0 00:38:03.081 21:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:03.339 21:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:38:03.597 NULL1 00:38:03.597 21:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
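The fixture for the hot-plug stress test is now in place: subsystem nqn.2016-06.io.spdk:cnode1 (capped at 10 namespaces via -m 10) listens on 10.0.0.2:4420 and carries the delay-backed Delay0 namespace plus the NULL1 null bdev created above. The trace that follows is the stress loop itself; a condensed sketch of the pattern it shows (variable names simplified here, the script itself uses PERF_PID and null_size) is:

    # keep a 30 s randread perf run going while namespaces are churned underneath it
    ./build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    perf_pid=$!
    size=1000
    while kill -0 "$perf_pid"; do                                      # loop until the perf run exits
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # detach NSID 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # re-attach it
        size=$((size + 1))
        rpc.py bdev_null_resize NULL1 "$size"                          # resize NULL1's backing bdev
    done

Each pass is visible below as the null_size counter ticking up (1001, 1002, ...), and the 'Read completed with error (sct=0, sc=11)' lines, with repeats suppressed, are consistent with reads landing in the windows where NSID 1 is detached.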
00:38:03.854 21:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3177694 00:38:03.854 21:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:03.854 21:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:03.854 21:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:38:04.112 21:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:04.370 21:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:38:04.370 21:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:38:04.628 true 00:38:04.628 21:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:04.628 21:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:04.885 21:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:05.142 21:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:38:05.142 21:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:38:05.398 true 00:38:05.398 21:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:05.398 21:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:05.655 21:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:05.913 21:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:38:05.913 21:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:38:06.171 true 00:38:06.171 21:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:06.171 21:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:07.104 Read completed with error (sct=0, sc=11) 00:38:07.104 21:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:07.362 21:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:38:07.362 21:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:38:07.620 true 00:38:07.876 21:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:07.876 21:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:08.133 21:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:08.391 21:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:38:08.391 21:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:38:08.649 true 00:38:08.649 21:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:08.649 21:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:08.906 21:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:09.163 21:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:38:09.163 21:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:38:09.421 true 00:38:09.421 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:09.421 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:10.354 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:10.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:10.611 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:38:10.611 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:38:10.869 true 00:38:10.869 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:10.869 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:11.126 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:11.384 21:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:38:11.384 21:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:38:11.641 true 00:38:11.641 21:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:11.642 21:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:11.899 21:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:12.157 21:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:38:12.157 21:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:38:12.415 true 00:38:12.415 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:12.415 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:13.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:13.787 21:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:13.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:13.787 21:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:38:13.787 
21:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:38:14.045 true 00:38:14.045 21:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:14.045 21:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:14.302 21:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:14.560 21:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:38:14.560 21:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:38:14.818 true 00:38:14.818 21:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:14.818 21:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:15.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:15.750 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:16.007 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:38:16.007 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:38:16.265 true 00:38:16.265 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:16.265 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:16.522 21:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:16.780 21:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:38:16.780 21:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:38:17.038 true 00:38:17.038 21:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:17.038 21:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:17.298 21:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:17.616 21:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:38:17.616 21:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:38:17.914 true 00:38:17.914 21:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:17.914 21:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:18.847 21:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:18.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:19.103 21:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:38:19.103 21:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:38:19.360 true 00:38:19.360 21:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:19.360 21:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:19.618 21:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:19.875 21:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:38:19.875 21:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:38:20.133 true 00:38:20.133 21:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:20.133 21:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:20.391 21:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:20.649 21:27:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:38:20.649 21:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:38:20.907 true 00:38:20.907 21:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:20.907 21:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:21.840 21:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:22.098 21:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:38:22.098 21:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:38:22.355 true 00:38:22.355 21:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:22.355 21:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:22.613 21:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:22.869 21:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:38:22.869 21:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:38:23.126 true 00:38:23.127 21:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:23.127 21:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:23.384 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:23.640 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:38:23.640 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:38:24.203 true 00:38:24.203 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:24.203 21:27:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:25.135 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:25.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:25.135 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:38:25.135 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:38:25.393 true 00:38:25.649 21:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:25.649 21:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:25.905 21:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:26.163 21:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:38:26.163 21:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:38:26.421 true 00:38:26.421 21:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:26.421 21:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:27.354 21:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:27.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:27.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:27.354 21:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:38:27.354 21:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:38:27.612 true 00:38:27.612 21:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:27.612 21:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:27.870 21:28:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:28.127 21:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:38:28.127 21:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:38:28.385 true 00:38:28.643 21:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:28.643 21:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:29.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:29.209 21:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:29.466 21:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:38:29.466 21:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:38:29.724 true 00:38:29.724 21:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:29.724 21:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:29.981 21:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:30.239 21:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:38:30.239 21:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:38:30.498 true 00:38:30.755 21:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:30.755 21:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:31.012 21:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:31.269 21:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:38:31.269 21:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:38:31.526 true 00:38:31.526 21:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:31.526 21:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:32.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:32.458 21:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:32.715 21:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:38:32.715 21:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:38:32.973 true 00:38:32.973 21:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:32.973 21:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:33.230 21:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:33.488 21:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:38:33.488 21:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:38:33.746 true 00:38:33.746 21:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:33.746 21:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:34.679 Initializing NVMe Controllers 00:38:34.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:34.679 Controller IO queue size 128, less than required. 00:38:34.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:34.679 Controller IO queue size 128, less than required. 00:38:34.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:34.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:34.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:34.679 Initialization complete. Launching workers. 
00:38:34.679 ======================================================== 00:38:34.679 Latency(us) 00:38:34.679 Device Information : IOPS MiB/s Average min max 00:38:34.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 395.70 0.19 131411.46 3202.35 1017611.15 00:38:34.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6281.03 3.07 20312.97 3153.57 392966.86 00:38:34.679 ======================================================== 00:38:34.679 Total : 6676.73 3.26 26897.28 3153.57 1017611.15 00:38:34.679 00:38:34.679 21:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:34.937 21:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:38:34.937 21:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:38:35.195 true 00:38:35.195 21:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3177694 00:38:35.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3177694) - No such process 00:38:35.195 21:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3177694 00:38:35.195 21:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:35.452 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:35.710 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:38:35.710 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:38:35.710 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:38:35.710 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:35.710 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:38:35.968 null0 00:38:35.968 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:35.968 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:35.968 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:38:36.226 null1 00:38:36.226 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:36.226 
21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:36.226 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:38:36.484 null2 00:38:36.484 21:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:36.484 21:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:36.484 21:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:38:36.742 null3 00:38:36.742 21:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:36.742 21:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:36.742 21:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:38:37.000 null4 00:38:37.000 21:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:37.000 21:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:37.000 21:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:38:37.258 null5 00:38:37.259 21:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:37.259 21:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:37.259 21:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:38:37.517 null6 00:38:37.517 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:37.517 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:37.517 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:38:37.777 null7 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:37.777 21:28:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:37.777 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.778 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:37.778 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:37.778 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:38:37.778 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:37.778 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:37.778 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:38:37.778 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:37.778 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3181723 3181724 3181725 3181727 3181730 3181732 3181734 3181736 00:38:37.778 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:37.778 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:38.072 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:38.072 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:38.072 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:38.072 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:38.072 21:28:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:38.072 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:38.072 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:38.072 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:38.356 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.356 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.356 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.356 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.356 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:38.356 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:38.357 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.357 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.357 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:38.357 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.357 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.357 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:38.357 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.357 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.357 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:38:38.357 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.357 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.357 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.357 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.357 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:38.357 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:38.357 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.357 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.357 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:38.615 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:38.615 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:38.615 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:38.615 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:38.615 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:38.615 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:38.615 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:38.874 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.132 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:39.391 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:39.391 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:39.391 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:39.391 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:39.391 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:39.391 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:39.391 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:39.391 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.651 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:39.910 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:39.910 21:28:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:39.910 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:39.910 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:39.910 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:39.910 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:39.910 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:39.910 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.169 21:28:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.169 21:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:40.428 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:40.428 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:40.428 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:40.428 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:40.428 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:40.428 
21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:40.428 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:40.428 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:40.994 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.994 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.994 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:40.994 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.994 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.994 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:40.994 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.994 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.994 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.994 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.994 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:40.995 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:40.995 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.995 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.995 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:40.995 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.995 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.995 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:40.995 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.995 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.995 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:40.995 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.995 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.995 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:41.253 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:41.253 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:41.253 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:41.253 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:41.253 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:41.253 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:41.253 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:41.253 21:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.511 
21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.511 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:41.770 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:41.770 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:41.770 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:41.770 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:41.770 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:41.770 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:41.770 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:41.770 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.028 21:28:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.028 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:42.286 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.286 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.286 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:42.286 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:42.286 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:42.286 21:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:42.286 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:42.286 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:42.286 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:42.286 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:42.545 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:42.545 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.545 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.545 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:42.545 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.545 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.545 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:42.545 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.545 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.545 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:42.545 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.545 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.545 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:42.545 21:28:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.545 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.545 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.545 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:42.545 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.545 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:42.545 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.546 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.546 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:42.804 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:42.804 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:42.804 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:42.804 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:42.804 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:42.804 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:42.804 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:43.062 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:43.062 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:43.062 
21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:43.062 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.320 21:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:43.320 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.320 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.320 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:43.578 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:43.578 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:43.578 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:43.578 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:43.578 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:43.578 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:43.578 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:43.578 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:43.835 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.835 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:43.836 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:43.836 rmmod nvme_tcp 00:38:43.836 rmmod nvme_fabrics 00:38:43.836 rmmod nvme_keyring 00:38:44.093 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:44.093 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:38:44.093 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:38:44.093 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3177159 ']' 00:38:44.093 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3177159 00:38:44.093 21:28:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3177159 ']' 00:38:44.093 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3177159 00:38:44.093 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:38:44.093 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:44.093 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3177159 00:38:44.093 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:44.093 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:44.093 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3177159' 00:38:44.093 killing process with pid 3177159 00:38:44.093 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3177159 00:38:44.093 21:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3177159 00:38:45.469 21:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:45.469 21:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:45.469 21:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:45.469 21:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:38:45.469 21:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:38:45.469 21:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:45.469 21:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:38:45.469 21:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:45.469 21:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:45.469 21:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:45.469 21:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:45.469 21:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:47.376 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:47.376 00:38:47.376 real 0m49.439s 00:38:47.376 user 3m23.166s 00:38:47.376 sys 0m21.837s 00:38:47.376 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:47.376 21:28:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:47.376 ************************************ 00:38:47.376 END TEST nvmf_ns_hotplug_stress 00:38:47.376 ************************************ 00:38:47.376 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:47.376 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:47.376 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:47.376 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:47.376 ************************************ 00:38:47.376 START TEST nvmf_delete_subsystem 00:38:47.376 ************************************ 00:38:47.376 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:47.376 * Looking for test storage... 00:38:47.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:47.376 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:47.376 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:38:47.376 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:38:47.635 21:28:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:47.635 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:47.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.636 --rc genhtml_branch_coverage=1 00:38:47.636 --rc genhtml_function_coverage=1 00:38:47.636 --rc genhtml_legend=1 00:38:47.636 --rc geninfo_all_blocks=1 00:38:47.636 --rc geninfo_unexecuted_blocks=1 00:38:47.636 00:38:47.636 ' 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:47.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.636 --rc genhtml_branch_coverage=1 00:38:47.636 --rc genhtml_function_coverage=1 00:38:47.636 --rc genhtml_legend=1 00:38:47.636 --rc geninfo_all_blocks=1 00:38:47.636 --rc geninfo_unexecuted_blocks=1 00:38:47.636 00:38:47.636 ' 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:47.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.636 --rc genhtml_branch_coverage=1 00:38:47.636 --rc genhtml_function_coverage=1 00:38:47.636 --rc genhtml_legend=1 00:38:47.636 --rc geninfo_all_blocks=1 00:38:47.636 --rc 
geninfo_unexecuted_blocks=1 00:38:47.636 00:38:47.636 ' 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:47.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.636 --rc genhtml_branch_coverage=1 00:38:47.636 --rc genhtml_function_coverage=1 00:38:47.636 --rc genhtml_legend=1 00:38:47.636 --rc geninfo_all_blocks=1 00:38:47.636 --rc geninfo_unexecuted_blocks=1 00:38:47.636 00:38:47.636 ' 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:47.636 21:28:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:38:47.636 21:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:49.538 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:49.538 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:38:49.538 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:49.538 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:49.538 21:28:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:49.538 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:49.538 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:49.539 21:28:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:49.539 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:49.539 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:49.539 21:28:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:49.539 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:49.539 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:49.539 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:49.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:49.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:38:49.799 00:38:49.799 --- 10.0.0.2 ping statistics --- 00:38:49.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:49.799 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:49.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:49.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:38:49.799 00:38:49.799 --- 10.0.0.1 ping statistics --- 00:38:49.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:49.799 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3184615 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3184615 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3184615 ']' 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:49.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
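For anyone reproducing this environment by hand, the nvmftestinit/nvmf_tcp_init sequence traced above reduces to the shell steps below. Interface names (cvl_0_0 on the target side, cvl_0_1 on the initiator side), the addresses and the iptables rule are copied from the trace; sudo is added here because the CI job itself runs as root. A minimal sketch, not the test suite's exact helper:

# Move the target-side port into its own network namespace and address both ends.
sudo ip -4 addr flush cvl_0_0
sudo ip -4 addr flush cvl_0_1
sudo ip netns add cvl_0_0_ns_spdk
sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
sudo ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator stays in the root namespace
sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target lives inside the namespace
sudo ip link set cvl_0_1 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic in on the initiator-facing interface, then verify both directions.
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1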
00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:49.799 21:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:49.799 [2024-11-19 21:28:23.497480] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:49.799 [2024-11-19 21:28:23.500010] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:38:49.799 [2024-11-19 21:28:23.500155] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:50.057 [2024-11-19 21:28:23.661260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:50.057 [2024-11-19 21:28:23.799155] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:50.057 [2024-11-19 21:28:23.799236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:50.057 [2024-11-19 21:28:23.799273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:50.057 [2024-11-19 21:28:23.799295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:50.057 [2024-11-19 21:28:23.799326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:50.057 [2024-11-19 21:28:23.801842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:50.057 [2024-11-19 21:28:23.801850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:50.624 [2024-11-19 21:28:24.165867] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:50.624 [2024-11-19 21:28:24.166632] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:50.624 [2024-11-19 21:28:24.166991] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
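The EAL/reactor/interrupt-mode notices above come from launching nvmf_tgt inside that namespace. The flags are the ones visible in the trace (-i 0 is the shared-memory id, -e 0xFFFF the tracepoint group mask, -m 0x3 pins the two reactors to cores 0 and 1). The readiness loop below is only a simplified stand-in for the suite's waitforlisten helper, and using scripts/rpc.py spdk_get_version as the probe is an assumption, not something shown in this log:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start the NVMe-oF target in interrupt mode inside the target namespace.
sudo ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
tgt_pid=$!

# Crude stand-in for waitforlisten: poll the RPC socket until the app answers.
for _ in $(seq 1 100); do
    sudo "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version &>/dev/null && break
    sleep 0.1
done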
00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:50.882 [2024-11-19 21:28:24.490953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:50.882 [2024-11-19 21:28:24.511249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:50.882 NULL1 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.882 21:28:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:50.882 Delay0 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3184767 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:50.882 21:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:38:50.882 [2024-11-19 21:28:24.640797] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
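Condensed from the rpc_cmd calls traced above, the configuration for this test case is: a TCP transport, one subsystem backed by a null bdev wrapped in a delay bdev (1,000,000 us of added latency on reads and writes), and a 5 second spdk_nvme_perf run started against it from the initiator side. All flags below are copied verbatim from the trace; the rpc shell function is just a stand-in for the suite's rpc_cmd wrapper, with paths as in the previous sketch:

rpc() { sudo "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512               # null bdev with 512-byte blocks (sizes as in the trace)
rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s average/p99 read and write latency
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Drive 70/30 random read/write I/O at queue depth 128 for 5 seconds from the root namespace.
sudo "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!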
00:38:52.780 21:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:52.780 21:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.780 21:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Write completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Write completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Write completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Write completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Write completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Write completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Write completed with error 
(sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Write completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Write completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Write completed with error (sct=0, sc=8) 00:38:53.038 Write completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.038 Write completed with error (sct=0, sc=8) 00:38:53.038 Read completed with error (sct=0, sc=8) 00:38:53.038 starting I/O failed: -6 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 starting I/O failed: -6 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 starting I/O failed: -6 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 starting I/O failed: -6 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 starting I/O failed: -6 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 starting I/O failed: -6 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 starting I/O failed: -6 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 starting I/O failed: -6 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 
Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 starting I/O failed: -6 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 starting I/O failed: -6 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read 
completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Write completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.039 Read completed with error (sct=0, sc=8) 00:38:53.973 [2024-11-19 21:28:27.712368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 [2024-11-19 21:28:27.750491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same 
with the state(6) to be set 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 [2024-11-19 21:28:27.752132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 [2024-11-19 21:28:27.753613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with 
error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Read completed with error (sct=0, sc=8) 00:38:53.973 Write completed with error (sct=0, sc=8) 00:38:53.973 [2024-11-19 21:28:27.753995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:38:53.973 21:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:53.973 21:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:38:53.973 21:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3184767 00:38:53.974 21:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:38:53.974 Initializing NVMe Controllers 00:38:53.974 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:53.974 Controller IO queue size 128, less than required. 00:38:53.974 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:53.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:53.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:53.974 Initialization complete. Launching workers. 
00:38:53.974 ======================================================== 00:38:53.974 Latency(us) 00:38:53.974 Device Information : IOPS MiB/s Average min max 00:38:53.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.66 0.08 893913.18 770.19 1017752.62 00:38:53.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.70 0.08 906259.49 887.68 1017945.29 00:38:53.974 ======================================================== 00:38:53.974 Total : 343.36 0.17 900015.17 770.19 1017945.29 00:38:53.974 00:38:53.974 [2024-11-19 21:28:27.758890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor 00:38:53.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3184767 00:38:54.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3184767) - No such process 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3184767 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3184767 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3184767 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.540 
21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:54.540 [2024-11-19 21:28:28.275209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3185291 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3185291 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:54.540 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:54.798 [2024-11-19 21:28:28.394770] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
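The rpc_cmd traces just above (delete_subsystem.sh lines 48-52) rebuild the subsystem and immediately put load back on it. Written out as direct RPC calls (rpc_cmd is the test framework's wrapper around scripts/rpc.py; the Delay0 bdev and the default /var/tmp/spdk.sock socket come from earlier in the test, and paths are abbreviated here), the sequence is roughly:

RPC=./scripts/rpc.py   # the log uses the full workspace path
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# host-side load: 3 s of 70/30 random read/write, 512-byte I/O, queue depth 128,
# on cores 2-3 (-c 0xC), exactly as launched at delete_subsystem.sh line 52
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &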
00:38:55.056 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:55.056 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3185291 00:38:55.056 21:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:55.622 21:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:55.622 21:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3185291 00:38:55.622 21:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:56.186 21:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:56.186 21:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3185291 00:38:56.186 21:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:56.750 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:56.750 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3185291 00:38:56.750 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:57.314 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:57.315 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3185291 00:38:57.315 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:57.572 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:57.572 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3185291 00:38:57.572 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:57.830 Initializing NVMe Controllers 00:38:57.830 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:57.830 Controller IO queue size 128, less than required. 00:38:57.830 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:57.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:57.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:57.830 Initialization complete. Launching workers. 
00:38:57.830 ======================================================== 00:38:57.830 Latency(us) 00:38:57.830 Device Information : IOPS MiB/s Average min max 00:38:57.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1006267.02 1000287.17 1017472.18 00:38:57.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005795.01 1000318.91 1014640.71 00:38:57.830 ======================================================== 00:38:57.830 Total : 256.00 0.12 1006031.02 1000287.17 1017472.18 00:38:57.830 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3185291 00:38:58.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3185291) - No such process 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3185291 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:58.088 rmmod nvme_tcp 00:38:58.088 rmmod nvme_fabrics 00:38:58.088 rmmod nvme_keyring 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3184615 ']' 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3184615 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3184615 ']' 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3184615 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:58.088 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3184615 00:38:58.347 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:58.347 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:58.347 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3184615' 00:38:58.347 killing process with pid 3184615 00:38:58.347 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3184615 00:38:58.347 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3184615 00:38:59.283 21:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:59.283 21:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:59.283 21:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:59.283 21:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:38:59.283 21:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:38:59.283 21:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:59.283 21:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:38:59.283 21:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:59.283 21:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:59.283 21:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:59.283 21:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:59.283 21:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:01.817 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:01.817 00:39:01.817 real 0m14.067s 00:39:01.817 user 0m26.159s 00:39:01.817 sys 0m3.976s 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:01.818 ************************************ 00:39:01.818 END TEST nvmf_delete_subsystem 00:39:01.818 ************************************ 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:01.818 ************************************ 00:39:01.818 START TEST nvmf_host_management 00:39:01.818 ************************************ 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:01.818 * Looking for test storage... 00:39:01.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:01.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.818 --rc genhtml_branch_coverage=1 00:39:01.818 --rc genhtml_function_coverage=1 00:39:01.818 --rc genhtml_legend=1 00:39:01.818 --rc geninfo_all_blocks=1 00:39:01.818 --rc geninfo_unexecuted_blocks=1 00:39:01.818 00:39:01.818 ' 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:01.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.818 --rc genhtml_branch_coverage=1 00:39:01.818 --rc genhtml_function_coverage=1 00:39:01.818 --rc genhtml_legend=1 00:39:01.818 --rc geninfo_all_blocks=1 00:39:01.818 --rc geninfo_unexecuted_blocks=1 00:39:01.818 00:39:01.818 ' 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:01.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.818 --rc genhtml_branch_coverage=1 00:39:01.818 --rc genhtml_function_coverage=1 00:39:01.818 --rc genhtml_legend=1 00:39:01.818 --rc geninfo_all_blocks=1 00:39:01.818 --rc geninfo_unexecuted_blocks=1 00:39:01.818 00:39:01.818 ' 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:01.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.818 --rc genhtml_branch_coverage=1 00:39:01.818 --rc genhtml_function_coverage=1 00:39:01.818 --rc genhtml_legend=1 
00:39:01.818 --rc geninfo_all_blocks=1 00:39:01.818 --rc geninfo_unexecuted_blocks=1 00:39:01.818 00:39:01.818 ' 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.818 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:01.819 21:28:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:39:01.819 21:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:03.723 21:28:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:03.723 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:03.723 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
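The device-discovery walk above keys off PCI vendor/device IDs (0x8086:0x159b is in the e810 list built a few steps earlier) and then asks sysfs which kernel net devices sit on each matching function; this is where the cvl_0_0 and cvl_0_1 names used by the rest of the test come from. A condensed sketch of that lookup for one of the functions reported in this log:

pci=0000:0a:00.0                                      # BDF echoed above
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)      # same glob as nvmf/common.sh
pci_net_devs=("${pci_net_devs[@]##*/}")               # keep just the names, e.g. cvl_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"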
00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:03.723 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:03.723 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:03.723 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:03.724 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:03.724 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:03.724 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:03.724 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:03.724 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:03.982 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:03.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:03.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:39:03.983 00:39:03.983 --- 10.0.0.2 ping statistics --- 00:39:03.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:03.983 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:03.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:03.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:39:03.983 00:39:03.983 --- 10.0.0.1 ping statistics --- 00:39:03.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:03.983 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3187755 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3187755 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3187755 ']' 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:03.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:03.983 21:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:03.983 [2024-11-19 21:28:37.650673] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:03.983 [2024-11-19 21:28:37.653223] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:39:03.983 [2024-11-19 21:28:37.653325] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:04.241 [2024-11-19 21:28:37.798306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:04.241 [2024-11-19 21:28:37.923299] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:04.241 [2024-11-19 21:28:37.923384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:04.241 [2024-11-19 21:28:37.923408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:04.241 [2024-11-19 21:28:37.923426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:04.241 [2024-11-19 21:28:37.923447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:04.241 [2024-11-19 21:28:37.925950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:04.241 [2024-11-19 21:28:37.925999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:04.241 [2024-11-19 21:28:37.929098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:04.241 [2024-11-19 21:28:37.929102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:04.500 [2024-11-19 21:28:38.252745] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:04.500 [2024-11-19 21:28:38.263385] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:04.500 [2024-11-19 21:28:38.263518] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:04.500 [2024-11-19 21:28:38.264383] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:04.500 [2024-11-19 21:28:38.264758] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
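What the ip/iptables/ping traces above record is nvmftestinit splitting the two E810 ports into a minimal two-node topology, with one port moved into a network namespace as the target side (10.0.0.2) and its peer left in the root namespace as the initiator side (10.0.0.1), and nvmfappstart then launching the interrupt-mode target inside that namespace. Condensed (paths abbreviated, the iptables comment tag omitted):

NS=cvl_0_0_ns_spdk
ip netns add $NS
ip link set cvl_0_0 netns $NS                    # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # root ns -> target ns
ip netns exec $NS ping -c 1 10.0.0.1             # target ns -> root ns
ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &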
00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:05.067 [2024-11-19 21:28:38.658169] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:05.067 Malloc0 00:39:05.067 [2024-11-19 21:28:38.782380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3187926 00:39:05.067 21:28:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3187926 /var/tmp/bdevperf.sock 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3187926 ']' 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:05.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:05.067 { 00:39:05.067 "params": { 00:39:05.067 "name": "Nvme$subsystem", 00:39:05.067 "trtype": "$TEST_TRANSPORT", 00:39:05.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:05.067 "adrfam": "ipv4", 00:39:05.067 "trsvcid": "$NVMF_PORT", 00:39:05.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:05.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:05.067 "hdgst": ${hdgst:-false}, 00:39:05.067 "ddgst": ${ddgst:-false} 00:39:05.067 }, 00:39:05.067 "method": "bdev_nvme_attach_controller" 00:39:05.067 } 00:39:05.067 EOF 00:39:05.067 )") 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:05.067 21:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:05.067 "params": { 00:39:05.067 "name": "Nvme0", 00:39:05.067 "trtype": "tcp", 00:39:05.067 "traddr": "10.0.0.2", 00:39:05.067 "adrfam": "ipv4", 00:39:05.067 "trsvcid": "4420", 00:39:05.067 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:05.067 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:05.067 "hdgst": false, 00:39:05.067 "ddgst": false 00:39:05.067 }, 00:39:05.067 "method": "bdev_nvme_attach_controller" 00:39:05.067 }' 00:39:05.326 [2024-11-19 21:28:38.904554] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:39:05.326 [2024-11-19 21:28:38.904716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3187926 ] 00:39:05.326 [2024-11-19 21:28:39.040844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:05.583 [2024-11-19 21:28:39.168885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:06.152 Running I/O for 10 seconds... 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=195 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 195 -ge 100 ']' 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:06.152 [2024-11-19 21:28:39.906098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:06.152 [2024-11-19 21:28:39.906190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:06.152 [2024-11-19 21:28:39.906212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:06.152 [2024-11-19 21:28:39.906231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:06.152 [2024-11-19 21:28:39.906248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:06.152 [2024-11-19 21:28:39.917930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:06.152 [2024-11-19 21:28:39.917982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.152 
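The check traced above polls bdev_get_iostat on the bdevperf RPC socket and extracts num_read_ops with jq; once at least 100 reads have completed (195 here) it breaks out, and the test removes the host NQN from the subsystem's allowed list so the target drops the live connection. The ABORTED - SQ DELETION prints that start just above and continue below are the expected fallout of that forced disconnect, after which the host is added back so the initiator can reconnect. Roughly, with the retry count and sleep interval as illustrative placeholders:

# poll until the bdev has served enough reads (threshold taken from the trace above)
for i in {1..10}; do
  ops=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
  [ "$ops" -ge 100 ] && break
  sleep 1
done
# yank the host out of the subsystem to force a disconnect, then re-allow it
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0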
[2024-11-19 21:28:39.918011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:06.152 [2024-11-19 21:28:39.918033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.152 [2024-11-19 21:28:39.918055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:06.152 [2024-11-19 21:28:39.918093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.152 [2024-11-19 21:28:39.918116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:06.152 [2024-11-19 21:28:39.918136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.152 [2024-11-19 21:28:39.918156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.152 21:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:39:06.152 [2024-11-19 21:28:39.924868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.152 [2024-11-19 21:28:39.924906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.152 [2024-11-19 21:28:39.924961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.152 [2024-11-19 21:28:39.924985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.152 [2024-11-19 21:28:39.925010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.152 [2024-11-19 21:28:39.925049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.152 [2024-11-19 21:28:39.925083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.152 [2024-11-19 21:28:39.925113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.152 [2024-11-19 21:28:39.925139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.152 [2024-11-19 21:28:39.925160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.152 [2024-11-19 21:28:39.925184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.152 [2024-11-19 21:28:39.925206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.152 [2024-11-19 21:28:39.925230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.152 [2024-11-19 21:28:39.925251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.152 [2024-11-19 21:28:39.925275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.152 [2024-11-19 21:28:39.925295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.152 [2024-11-19 21:28:39.925318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.152 [2024-11-19 21:28:39.925339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.152 [2024-11-19 21:28:39.925372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.152 [2024-11-19 21:28:39.925393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.152 [2024-11-19 21:28:39.925416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.925438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.925462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.925483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.925506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.925527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.925550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.925572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.925596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.925617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.925640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.925662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.925690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.925713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.925737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.925759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.925782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.925803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.925827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.925849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.925872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.925894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.925917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.925939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.925963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.925984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:39:06.153 [2024-11-19 21:28:39.926161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 
[2024-11-19 21:28:39.926622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.926964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.926987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.927008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.927032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.927064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 
21:28:39.927096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.927118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.927142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.927164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.927187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.927208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.153 [2024-11-19 21:28:39.927232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.153 [2024-11-19 21:28:39.927253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.154 [2024-11-19 21:28:39.927277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.154 [2024-11-19 21:28:39.927298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.154 [2024-11-19 21:28:39.927321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.154 [2024-11-19 21:28:39.927342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.154 [2024-11-19 21:28:39.927371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.154 [2024-11-19 21:28:39.927393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.154 [2024-11-19 21:28:39.927417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.154 [2024-11-19 21:28:39.927445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.154 [2024-11-19 21:28:39.927470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.154 [2024-11-19 21:28:39.927492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.154 [2024-11-19 21:28:39.927515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.154 [2024-11-19 21:28:39.927537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.154 [2024-11-19 
21:28:39.927560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.154 [2024-11-19 21:28:39.927582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.154 [2024-11-19 21:28:39.927605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.154 [2024-11-19 21:28:39.927627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.154 [2024-11-19 21:28:39.927651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.154 [2024-11-19 21:28:39.927672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.154 [2024-11-19 21:28:39.927696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.154 [2024-11-19 21:28:39.927717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.154 [2024-11-19 21:28:39.927740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.154 [2024-11-19 21:28:39.927762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.154 [2024-11-19 21:28:39.927786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.154 [2024-11-19 21:28:39.927807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.154 [2024-11-19 21:28:39.927830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.154 [2024-11-19 21:28:39.927851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.154 [2024-11-19 21:28:39.927875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:06.154 [2024-11-19 21:28:39.927896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.154 [2024-11-19 21:28:39.928244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:39:06.154 [2024-11-19 21:28:39.929470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:39:06.154 task offset: 32768 on job bdev=Nvme0n1 fails 00:39:06.154 00:39:06.154 Latency(us) 00:39:06.154 [2024-11-19T20:28:39.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:06.154 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:06.154 Job: Nvme0n1 ended in about 0.21 seconds with error 00:39:06.154 Verification LBA range: start 0x0 length 0x400 00:39:06.154 
Nvme0n1 : 0.21 1212.27 75.77 303.07 0.00 40075.57 3786.52 41360.50 00:39:06.154 [2024-11-19T20:28:39.949Z] =================================================================================================================== 00:39:06.154 [2024-11-19T20:28:39.949Z] Total : 1212.27 75.77 303.07 0.00 40075.57 3786.52 41360.50 00:39:06.154 [2024-11-19 21:28:39.934296] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:06.454 [2024-11-19 21:28:40.066446] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:39:07.412 21:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3187926 00:39:07.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3187926) - No such process 00:39:07.412 21:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:39:07.412 21:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:39:07.412 21:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:39:07.412 21:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:39:07.412 21:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:07.412 21:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:07.412 21:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:07.412 21:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:07.412 { 00:39:07.412 "params": { 00:39:07.412 "name": "Nvme$subsystem", 00:39:07.412 "trtype": "$TEST_TRANSPORT", 00:39:07.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:07.412 "adrfam": "ipv4", 00:39:07.412 "trsvcid": "$NVMF_PORT", 00:39:07.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:07.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:07.412 "hdgst": ${hdgst:-false}, 00:39:07.412 "ddgst": ${ddgst:-false} 00:39:07.412 }, 00:39:07.412 "method": "bdev_nvme_attach_controller" 00:39:07.412 } 00:39:07.412 EOF 00:39:07.412 )") 00:39:07.412 21:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:39:07.412 21:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
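By this point the forced reset has run its course: the failed job's statistics are reported, and the kill -9 of the first bdevperf PID comes back with "No such process" because that instance already exited, which the script tolerates. The test then launches a second, one-second verify run against the same subsystem (its resolved JSON and throughput follow below) to confirm the target accepts I/O again after the host was re-added. In rough outline, reusing the helper and flags visible in the trace (binary path relative to the SPDK tree):

# the first bdevperf may already be gone after the forced reset; don't fail on that
kill -9 "$perfpid" || true
# short re-run to confirm the subsystem still serves I/O
build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1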
00:39:07.412 21:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:07.412 21:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:07.412 "params": { 00:39:07.412 "name": "Nvme0", 00:39:07.412 "trtype": "tcp", 00:39:07.412 "traddr": "10.0.0.2", 00:39:07.412 "adrfam": "ipv4", 00:39:07.412 "trsvcid": "4420", 00:39:07.412 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:07.412 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:07.412 "hdgst": false, 00:39:07.412 "ddgst": false 00:39:07.412 }, 00:39:07.412 "method": "bdev_nvme_attach_controller" 00:39:07.412 }' 00:39:07.412 [2024-11-19 21:28:41.006924] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:39:07.412 [2024-11-19 21:28:41.007060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188205 ] 00:39:07.412 [2024-11-19 21:28:41.141731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:07.670 [2024-11-19 21:28:41.272116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:08.236 Running I/O for 1 seconds... 00:39:09.169 1390.00 IOPS, 86.88 MiB/s 00:39:09.169 Latency(us) 00:39:09.169 [2024-11-19T20:28:42.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:09.169 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:09.169 Verification LBA range: start 0x0 length 0x400 00:39:09.169 Nvme0n1 : 1.03 1408.63 88.04 0.00 0.00 44442.09 5364.24 39807.05 00:39:09.169 [2024-11-19T20:28:42.964Z] =================================================================================================================== 00:39:09.169 [2024-11-19T20:28:42.964Z] Total : 1408.63 88.04 0.00 0.00 44442.09 5364.24 39807.05 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:10.103 rmmod nvme_tcp 00:39:10.103 rmmod nvme_fabrics 00:39:10.103 rmmod nvme_keyring 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3187755 ']' 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3187755 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3187755 ']' 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3187755 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3187755 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3187755' 00:39:10.103 killing process with pid 3187755 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3187755 00:39:10.103 21:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3187755 00:39:11.478 [2024-11-19 21:28:45.114630] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:39:11.478 21:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:11.478 21:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:11.478 21:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:11.478 21:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:39:11.478 21:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:39:11.478 21:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:39:11.478 21:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:11.478 21:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:11.478 21:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:39:11.478 21:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:11.478 21:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:11.478 21:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:39:14.012 00:39:14.012 real 0m12.093s 00:39:14.012 user 0m26.266s 00:39:14.012 sys 0m4.724s 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:14.012 ************************************ 00:39:14.012 END TEST nvmf_host_management 00:39:14.012 ************************************ 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:14.012 ************************************ 00:39:14.012 START TEST nvmf_lvol 00:39:14.012 ************************************ 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:14.012 * Looking for test storage... 
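Before nvmf_lvol starts, the host_management teardown traced above unloads the kernel initiator modules, stops the nvmf target application, and strips only the SPDK-tagged firewall rules and test addresses. Condensed into the commands that actually appear in the trace (the pid variable name is illustrative; cvl_0_1 is the interface name on this particular test rig):

rm -f ./local-job0-0-verify.state
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                                        # nvmf target app, pid 3187755 above
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except the SPDK test rules
ip -4 addr flush cvl_0_1                               # clear the test address from the second port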
00:39:14.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:14.012 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:14.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.012 --rc genhtml_branch_coverage=1 00:39:14.012 --rc genhtml_function_coverage=1 00:39:14.012 --rc genhtml_legend=1 00:39:14.013 --rc geninfo_all_blocks=1 00:39:14.013 --rc geninfo_unexecuted_blocks=1 00:39:14.013 00:39:14.013 ' 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:14.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.013 --rc genhtml_branch_coverage=1 00:39:14.013 --rc genhtml_function_coverage=1 00:39:14.013 --rc genhtml_legend=1 00:39:14.013 --rc geninfo_all_blocks=1 00:39:14.013 --rc geninfo_unexecuted_blocks=1 00:39:14.013 00:39:14.013 ' 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:14.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.013 --rc genhtml_branch_coverage=1 00:39:14.013 --rc genhtml_function_coverage=1 00:39:14.013 --rc genhtml_legend=1 00:39:14.013 --rc geninfo_all_blocks=1 00:39:14.013 --rc geninfo_unexecuted_blocks=1 00:39:14.013 00:39:14.013 ' 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:14.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.013 --rc genhtml_branch_coverage=1 00:39:14.013 --rc genhtml_function_coverage=1 00:39:14.013 --rc genhtml_legend=1 00:39:14.013 --rc geninfo_all_blocks=1 00:39:14.013 --rc geninfo_unexecuted_blocks=1 00:39:14.013 00:39:14.013 ' 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:14.013 21:28:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:14.013 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:14.014 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:14.014 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:14.014 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:14.014 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:14.014 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:39:14.014 21:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:15.916 21:28:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:15.916 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:15.916 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:15.916 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:15.916 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:15.916 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:15.916 
21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:15.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:15.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:39:15.917 00:39:15.917 --- 10.0.0.2 ping statistics --- 00:39:15.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:15.917 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:15.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:15.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:39:15.917 00:39:15.917 --- 10.0.0.1 ping statistics --- 00:39:15.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:15.917 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3190574 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3190574 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3190574 ']' 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:15.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:15.917 21:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:16.174 [2024-11-19 21:28:49.745951] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
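Before the target was launched above, nvmftestinit wired the two E810 ports into a point-to-point test network. Condensed, the setup amounts to the following (interface names, addresses and the 4420 port are taken directly from the trace; the real script also tags the iptables rule so nvmftestfini can strip it again later via the iptables-save/restore step visible further down):

  # Move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1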
00:39:16.174 [2024-11-19 21:28:49.748601] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:39:16.175 [2024-11-19 21:28:49.748700] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:16.175 [2024-11-19 21:28:49.892588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:16.433 [2024-11-19 21:28:50.034507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:16.433 [2024-11-19 21:28:50.034593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:16.433 [2024-11-19 21:28:50.034623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:16.433 [2024-11-19 21:28:50.034651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:16.433 [2024-11-19 21:28:50.034675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:16.433 [2024-11-19 21:28:50.037303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:16.433 [2024-11-19 21:28:50.037368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:16.433 [2024-11-19 21:28:50.037378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:16.692 [2024-11-19 21:28:50.413251] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:16.692 [2024-11-19 21:28:50.414399] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:16.692 [2024-11-19 21:28:50.415192] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:16.692 [2024-11-19 21:28:50.415536] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
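With the target now up in interrupt mode (core mask 0x7, three reactors on cores 0-2), the rest of the lvol test is driven entirely through rpc.py, as the trace below shows. A condensed sketch of that RPC sequence, with the run-time generated UUIDs captured into shell variables the way the script does:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                                   # Malloc0
  $rpc bdev_malloc_create 64 512                                   # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe the two malloc bdevs
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB logical volume
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # While spdk_nvme_perf runs randwrite against the namespace, the volume is reshaped live:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30                                 # grow to 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"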
00:39:16.950 21:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:16.950 21:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:39:16.950 21:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:16.950 21:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:16.950 21:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:16.950 21:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:16.950 21:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:17.516 [2024-11-19 21:28:51.002547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:17.516 21:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:17.774 21:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:39:17.774 21:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:18.032 21:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:39:18.032 21:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:39:18.291 21:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:39:18.549 21:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=894d04f5-ffa7-4293-be47-3d3bab45eb78 00:39:18.549 21:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 894d04f5-ffa7-4293-be47-3d3bab45eb78 lvol 20 00:39:18.807 21:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7e70c54a-5053-4d0b-9a04-a37a464038f0 00:39:18.807 21:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:19.372 21:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7e70c54a-5053-4d0b-9a04-a37a464038f0 00:39:19.372 21:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:19.630 [2024-11-19 21:28:53.398727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:39:19.630 21:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:20.196 21:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3191098 00:39:20.196 21:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:39:20.196 21:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:39:21.130 21:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7e70c54a-5053-4d0b-9a04-a37a464038f0 MY_SNAPSHOT 00:39:21.388 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=95a88c84-3f5f-4afd-ac64-583a7f8fe009 00:39:21.388 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7e70c54a-5053-4d0b-9a04-a37a464038f0 30 00:39:21.648 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 95a88c84-3f5f-4afd-ac64-583a7f8fe009 MY_CLONE 00:39:21.906 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7d0c16a8-3b06-4d61-a24c-308d713c1b62 00:39:21.906 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 7d0c16a8-3b06-4d61-a24c-308d713c1b62 00:39:22.840 21:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3191098 00:39:30.950 Initializing NVMe Controllers 00:39:30.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:30.950 Controller IO queue size 128, less than required. 00:39:30.950 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:30.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:39:30.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:39:30.950 Initialization complete. Launching workers. 
00:39:30.950 ======================================================== 00:39:30.950 Latency(us) 00:39:30.950 Device Information : IOPS MiB/s Average min max 00:39:30.950 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8161.50 31.88 15702.22 554.58 191075.11 00:39:30.950 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8194.50 32.01 15633.87 3087.52 148850.11 00:39:30.950 ======================================================== 00:39:30.950 Total : 16356.00 63.89 15667.98 554.58 191075.11 00:39:30.950 00:39:30.950 21:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:30.950 21:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7e70c54a-5053-4d0b-9a04-a37a464038f0 00:39:31.208 21:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 894d04f5-ffa7-4293-be47-3d3bab45eb78 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:31.467 rmmod nvme_tcp 00:39:31.467 rmmod nvme_fabrics 00:39:31.467 rmmod nvme_keyring 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3190574 ']' 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3190574 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3190574 ']' 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3190574 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3190574 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3190574' 00:39:31.467 killing process with pid 3190574 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3190574 00:39:31.467 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3190574 00:39:32.843 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:32.843 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:32.843 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:32.843 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:39:32.843 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:39:32.843 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:32.843 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:39:32.843 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:32.843 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:32.843 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:32.843 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:32.843 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:35.373 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:35.373 00:39:35.373 real 0m21.362s 00:39:35.373 user 0m58.738s 00:39:35.373 sys 0m7.759s 00:39:35.373 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:35.373 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:35.373 ************************************ 00:39:35.373 END TEST nvmf_lvol 00:39:35.373 ************************************ 00:39:35.373 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:35.374 ************************************ 00:39:35.374 START TEST nvmf_lvs_grow 00:39:35.374 
************************************ 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:35.374 * Looking for test storage... 00:39:35.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:35.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.374 --rc genhtml_branch_coverage=1 00:39:35.374 --rc genhtml_function_coverage=1 00:39:35.374 --rc genhtml_legend=1 00:39:35.374 --rc geninfo_all_blocks=1 00:39:35.374 --rc geninfo_unexecuted_blocks=1 00:39:35.374 00:39:35.374 ' 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:35.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.374 --rc genhtml_branch_coverage=1 00:39:35.374 --rc genhtml_function_coverage=1 00:39:35.374 --rc genhtml_legend=1 00:39:35.374 --rc geninfo_all_blocks=1 00:39:35.374 --rc geninfo_unexecuted_blocks=1 00:39:35.374 00:39:35.374 ' 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:35.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.374 --rc genhtml_branch_coverage=1 00:39:35.374 --rc genhtml_function_coverage=1 00:39:35.374 --rc genhtml_legend=1 00:39:35.374 --rc geninfo_all_blocks=1 00:39:35.374 --rc geninfo_unexecuted_blocks=1 00:39:35.374 00:39:35.374 ' 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:35.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.374 --rc genhtml_branch_coverage=1 00:39:35.374 --rc genhtml_function_coverage=1 00:39:35.374 --rc genhtml_legend=1 00:39:35.374 --rc geninfo_all_blocks=1 00:39:35.374 --rc geninfo_unexecuted_blocks=1 00:39:35.374 00:39:35.374 ' 00:39:35.374 21:29:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.374 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
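For the lvs_grow suite, test/nvmf/common.sh is sourced again: it pins the service port at 4420, regenerates a host NQN/ID pair with 'nvme gen-hostnqn', and rebuilds NVMF_APP with --interrupt-mode appended (the '[' 1 -eq 1 ']' branch above). The bdevperf RPC socket set at nvmf_lvs_grow.sh@12 suggests I/O in this suite goes through bdevperf rather than the kernel initiator; purely as an illustration, a kernel-side connect using the variables common.sh just defined would look roughly like this (NVME_SUBNQN is only the default from common.sh@22, not necessarily what the test actually exposes):

  # Illustrative only: attach the kernel NVMe/TCP initiator with the generated host identity
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n "$NVME_SUBNQN" \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"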
00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:39:35.375 21:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:37.278 21:29:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
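The device-discovery pass above repeats what the lvol test already did: gather_supported_nvmf_pci_devs fills ID lists for Intel E810 (0x1592, 0x159b), X722 (0x37d2) and a range of Mellanox parts, then resolves each matching PCI function to its netdev through /sys/bus/pci/devices/<bdf>/net. An illustrative stand-alone equivalent for the E810 ID reported in the 'Found 0000:0a:00.x' lines below:

  # List net interfaces backed by Intel E810 functions (vendor 0x8086, device 0x159b)
  for bdf in $(lspci -D -n -d 8086:159b | awk '{print $1}'); do
      ls "/sys/bus/pci/devices/$bdf/net/"
  done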
00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:37.278 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:37.278 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:37.278 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:37.279 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:37.279 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:37.279 21:29:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:37.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:37.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:39:37.279 00:39:37.279 --- 10.0.0.2 ping statistics --- 00:39:37.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:37.279 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:37.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:37.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:39:37.279 00:39:37.279 --- 10.0.0.1 ping statistics --- 00:39:37.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:37.279 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3194479 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3194479 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3194479 ']' 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:37.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:37.279 21:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:37.279 [2024-11-19 21:29:11.002277] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
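The trace above is the standard nvmf/common.sh TCP test-bed bring-up: one port of the NIC pair (cvl_0_0) is moved into a private network namespace for the target, the peer port (cvl_0_1) stays on the host as the initiator side, 10.0.0.2 and 10.0.0.1 are assigned, TCP port 4420 is opened in iptables, and reachability is checked in both directions before nvmf_tgt is launched inside the namespace in interrupt mode. A condensed sketch of those commands, using the interface names from this particular run (other hosts will differ):

    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address on the host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
    ping -c 1 10.0.0.2                                    # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace -> host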
00:39:37.279 [2024-11-19 21:29:11.004877] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:39:37.279 [2024-11-19 21:29:11.004991] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:37.538 [2024-11-19 21:29:11.161374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:37.538 [2024-11-19 21:29:11.301218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:37.538 [2024-11-19 21:29:11.301290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:37.538 [2024-11-19 21:29:11.301316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:37.538 [2024-11-19 21:29:11.301336] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:37.538 [2024-11-19 21:29:11.301383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:37.538 [2024-11-19 21:29:11.302971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:38.106 [2024-11-19 21:29:11.665582] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:38.106 [2024-11-19 21:29:11.666021] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:38.365 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:38.365 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:39:38.365 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:38.365 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:38.365 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:38.365 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:38.365 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:38.623 [2024-11-19 21:29:12.280176] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:38.623 21:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:39:38.623 21:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:38.623 21:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:38.623 21:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:38.623 ************************************ 00:39:38.623 START TEST lvs_grow_clean 00:39:38.623 ************************************ 00:39:38.623 21:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:39:38.623 21:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:38.623 21:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:38.624 21:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:38.624 21:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:38.624 21:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:38.624 21:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:38.624 21:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:38.624 21:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:38.624 21:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:38.906 21:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:38.906 21:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:39.185 21:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=32e652ae-553d-42c2-b869-9b6c615175e7 00:39:39.185 21:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32e652ae-553d-42c2-b869-9b6c615175e7 00:39:39.185 21:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:39.443 21:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:39.443 21:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:39.443 21:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 32e652ae-553d-42c2-b869-9b6c615175e7 lvol 150 00:39:39.702 21:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e4510f1d-5b88-40eb-85e8-3fe0c9438f72 00:39:39.702 21:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:39.702 21:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:39.967 [2024-11-19 21:29:13.731922] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:39.967 [2024-11-19 21:29:13.732066] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:39.967 true 00:39:39.967 21:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32e652ae-553d-42c2-b869-9b6c615175e7 00:39:39.967 21:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:40.534 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:40.534 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:40.534 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e4510f1d-5b88-40eb-85e8-3fe0c9438f72 00:39:40.792 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:41.049 [2024-11-19 21:29:14.820373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:41.049 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:41.614 21:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3195044 00:39:41.614 21:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:41.614 21:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:41.614 21:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3195044 /var/tmp/bdevperf.sock 00:39:41.615 21:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3195044 ']' 00:39:41.615 21:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:39:41.615 21:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:41.615 21:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:41.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:41.615 21:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:41.615 21:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:41.615 [2024-11-19 21:29:15.197034] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:39:41.615 [2024-11-19 21:29:15.197206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3195044 ] 00:39:41.615 [2024-11-19 21:29:15.338254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:41.873 [2024-11-19 21:29:15.471236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:42.438 21:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:42.438 21:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:39:42.438 21:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:43.005 Nvme0n1 00:39:43.005 21:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:43.263 [ 00:39:43.263 { 00:39:43.263 "name": "Nvme0n1", 00:39:43.263 "aliases": [ 00:39:43.263 "e4510f1d-5b88-40eb-85e8-3fe0c9438f72" 00:39:43.263 ], 00:39:43.263 "product_name": "NVMe disk", 00:39:43.263 "block_size": 4096, 00:39:43.263 "num_blocks": 38912, 00:39:43.263 "uuid": "e4510f1d-5b88-40eb-85e8-3fe0c9438f72", 00:39:43.263 "numa_id": 0, 00:39:43.263 "assigned_rate_limits": { 00:39:43.263 "rw_ios_per_sec": 0, 00:39:43.263 "rw_mbytes_per_sec": 0, 00:39:43.263 "r_mbytes_per_sec": 0, 00:39:43.263 "w_mbytes_per_sec": 0 00:39:43.263 }, 00:39:43.263 "claimed": false, 00:39:43.263 "zoned": false, 00:39:43.263 "supported_io_types": { 00:39:43.263 "read": true, 00:39:43.263 "write": true, 00:39:43.263 "unmap": true, 00:39:43.263 "flush": true, 00:39:43.263 "reset": true, 00:39:43.263 "nvme_admin": true, 00:39:43.263 "nvme_io": true, 00:39:43.263 "nvme_io_md": false, 00:39:43.263 "write_zeroes": true, 00:39:43.263 "zcopy": false, 00:39:43.263 "get_zone_info": false, 00:39:43.263 "zone_management": false, 00:39:43.263 "zone_append": false, 00:39:43.263 "compare": true, 00:39:43.263 "compare_and_write": true, 00:39:43.263 "abort": true, 00:39:43.263 "seek_hole": false, 00:39:43.263 "seek_data": false, 00:39:43.263 "copy": true, 
00:39:43.263 "nvme_iov_md": false 00:39:43.263 }, 00:39:43.263 "memory_domains": [ 00:39:43.263 { 00:39:43.263 "dma_device_id": "system", 00:39:43.263 "dma_device_type": 1 00:39:43.263 } 00:39:43.263 ], 00:39:43.263 "driver_specific": { 00:39:43.263 "nvme": [ 00:39:43.263 { 00:39:43.263 "trid": { 00:39:43.263 "trtype": "TCP", 00:39:43.263 "adrfam": "IPv4", 00:39:43.263 "traddr": "10.0.0.2", 00:39:43.263 "trsvcid": "4420", 00:39:43.263 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:43.263 }, 00:39:43.263 "ctrlr_data": { 00:39:43.263 "cntlid": 1, 00:39:43.263 "vendor_id": "0x8086", 00:39:43.263 "model_number": "SPDK bdev Controller", 00:39:43.263 "serial_number": "SPDK0", 00:39:43.263 "firmware_revision": "25.01", 00:39:43.263 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:43.263 "oacs": { 00:39:43.263 "security": 0, 00:39:43.263 "format": 0, 00:39:43.263 "firmware": 0, 00:39:43.263 "ns_manage": 0 00:39:43.263 }, 00:39:43.263 "multi_ctrlr": true, 00:39:43.263 "ana_reporting": false 00:39:43.263 }, 00:39:43.263 "vs": { 00:39:43.263 "nvme_version": "1.3" 00:39:43.263 }, 00:39:43.263 "ns_data": { 00:39:43.263 "id": 1, 00:39:43.263 "can_share": true 00:39:43.263 } 00:39:43.263 } 00:39:43.263 ], 00:39:43.263 "mp_policy": "active_passive" 00:39:43.263 } 00:39:43.263 } 00:39:43.263 ] 00:39:43.263 21:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3195222 00:39:43.263 21:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:43.263 21:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:43.263 Running I/O for 10 seconds... 
00:39:44.637 Latency(us) 00:39:44.637 [2024-11-19T20:29:18.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:44.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:44.637 Nvme0n1 : 1.00 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:39:44.637 [2024-11-19T20:29:18.432Z] =================================================================================================================== 00:39:44.637 [2024-11-19T20:29:18.432Z] Total : 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:39:44.637 00:39:45.201 21:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 32e652ae-553d-42c2-b869-9b6c615175e7 00:39:45.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:45.459 Nvme0n1 : 2.00 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:39:45.459 [2024-11-19T20:29:19.254Z] =================================================================================================================== 00:39:45.459 [2024-11-19T20:29:19.254Z] Total : 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:39:45.459 00:39:45.459 true 00:39:45.459 21:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32e652ae-553d-42c2-b869-9b6c615175e7 00:39:45.459 21:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:46.024 21:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:46.024 21:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:46.024 21:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3195222 00:39:46.282 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:46.282 Nvme0n1 : 3.00 10625.67 41.51 0.00 0.00 0.00 0.00 0.00 00:39:46.282 [2024-11-19T20:29:20.077Z] =================================================================================================================== 00:39:46.282 [2024-11-19T20:29:20.077Z] Total : 10625.67 41.51 0.00 0.00 0.00 0.00 0.00 00:39:46.282 00:39:47.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:47.657 Nvme0n1 : 4.00 10636.25 41.55 0.00 0.00 0.00 0.00 0.00 00:39:47.657 [2024-11-19T20:29:21.452Z] =================================================================================================================== 00:39:47.657 [2024-11-19T20:29:21.452Z] Total : 10636.25 41.55 0.00 0.00 0.00 0.00 0.00 00:39:47.657 00:39:48.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:48.591 Nvme0n1 : 5.00 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:39:48.591 [2024-11-19T20:29:22.386Z] =================================================================================================================== 00:39:48.591 [2024-11-19T20:29:22.386Z] Total : 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:39:48.591 00:39:49.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:49.525 Nvme0n1 : 6.00 10689.17 41.75 0.00 0.00 0.00 0.00 0.00 00:39:49.525 [2024-11-19T20:29:23.320Z] 
=================================================================================================================== 00:39:49.525 [2024-11-19T20:29:23.320Z] Total : 10689.17 41.75 0.00 0.00 0.00 0.00 0.00 00:39:49.525 00:39:50.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:50.459 Nvme0n1 : 7.00 10704.29 41.81 0.00 0.00 0.00 0.00 0.00 00:39:50.459 [2024-11-19T20:29:24.254Z] =================================================================================================================== 00:39:50.459 [2024-11-19T20:29:24.254Z] Total : 10704.29 41.81 0.00 0.00 0.00 0.00 0.00 00:39:50.459 00:39:51.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:51.393 Nvme0n1 : 8.00 10715.62 41.86 0.00 0.00 0.00 0.00 0.00 00:39:51.393 [2024-11-19T20:29:25.188Z] =================================================================================================================== 00:39:51.393 [2024-11-19T20:29:25.188Z] Total : 10715.62 41.86 0.00 0.00 0.00 0.00 0.00 00:39:51.393 00:39:52.325 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:52.325 Nvme0n1 : 9.00 10738.56 41.95 0.00 0.00 0.00 0.00 0.00 00:39:52.325 [2024-11-19T20:29:26.120Z] =================================================================================================================== 00:39:52.325 [2024-11-19T20:29:26.120Z] Total : 10738.56 41.95 0.00 0.00 0.00 0.00 0.00 00:39:52.325 00:39:53.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:53.699 Nvme0n1 : 10.00 10744.20 41.97 0.00 0.00 0.00 0.00 0.00 00:39:53.699 [2024-11-19T20:29:27.494Z] =================================================================================================================== 00:39:53.699 [2024-11-19T20:29:27.494Z] Total : 10744.20 41.97 0.00 0.00 0.00 0.00 0.00 00:39:53.699 00:39:53.699 00:39:53.699 Latency(us) 00:39:53.699 [2024-11-19T20:29:27.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:53.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:53.699 Nvme0n1 : 10.01 10749.56 41.99 0.00 0.00 11900.80 10631.40 25437.68 00:39:53.699 [2024-11-19T20:29:27.494Z] =================================================================================================================== 00:39:53.699 [2024-11-19T20:29:27.494Z] Total : 10749.56 41.99 0.00 0.00 11900.80 10631.40 25437.68 00:39:53.699 { 00:39:53.699 "results": [ 00:39:53.699 { 00:39:53.699 "job": "Nvme0n1", 00:39:53.699 "core_mask": "0x2", 00:39:53.699 "workload": "randwrite", 00:39:53.699 "status": "finished", 00:39:53.699 "queue_depth": 128, 00:39:53.699 "io_size": 4096, 00:39:53.699 "runtime": 10.00692, 00:39:53.699 "iops": 10749.561303577924, 00:39:53.699 "mibps": 41.99047384210127, 00:39:53.699 "io_failed": 0, 00:39:53.699 "io_timeout": 0, 00:39:53.699 "avg_latency_us": 11900.8027319747, 00:39:53.699 "min_latency_us": 10631.395555555555, 00:39:53.699 "max_latency_us": 25437.677037037036 00:39:53.699 } 00:39:53.699 ], 00:39:53.699 "core_count": 1 00:39:53.699 } 00:39:53.699 21:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3195044 00:39:53.699 21:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3195044 ']' 00:39:53.699 21:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3195044 
00:39:53.700 21:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:39:53.700 21:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:53.700 21:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3195044 00:39:53.700 21:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:53.700 21:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:53.700 21:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3195044' 00:39:53.700 killing process with pid 3195044 00:39:53.700 21:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3195044 00:39:53.700 Received shutdown signal, test time was about 10.000000 seconds 00:39:53.700 00:39:53.700 Latency(us) 00:39:53.700 [2024-11-19T20:29:27.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:53.700 [2024-11-19T20:29:27.495Z] =================================================================================================================== 00:39:53.700 [2024-11-19T20:29:27.495Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:53.700 21:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3195044 00:39:54.265 21:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:54.524 21:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:54.782 21:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32e652ae-553d-42c2-b869-9b6c615175e7 00:39:54.782 21:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:55.348 21:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:55.348 21:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:39:55.348 21:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:55.348 [2024-11-19 21:29:29.088031] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:55.348 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32e652ae-553d-42c2-b869-9b6c615175e7 
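Once bdevperf exits, the clean path tears the stack down and checks the grown store's accounting: the listener and subsystem are removed, free_clusters is read back (61 of the 99 clusters remain free after the 38-cluster lvol), and deleting the AIO bdev hot-removes the lvstore, so a further bdev_lvol_get_lvstores is expected to fail, which the script asserts through its NOT wrapper. Approximately:

    rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'   # 61 here
    rpc.py bdev_aio_delete aio_bdev                     # closes the lvstore with its base bdev
    ! rpc.py bdev_lvol_get_lvstores -u "$lvs"           # must now fail: "No such device"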
00:39:55.348 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:39:55.348 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32e652ae-553d-42c2-b869-9b6c615175e7 00:39:55.348 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:55.348 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:55.348 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:55.348 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:55.348 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:55.348 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:55.348 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:55.348 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:55.348 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32e652ae-553d-42c2-b869-9b6c615175e7 00:39:55.606 request: 00:39:55.606 { 00:39:55.606 "uuid": "32e652ae-553d-42c2-b869-9b6c615175e7", 00:39:55.606 "method": "bdev_lvol_get_lvstores", 00:39:55.606 "req_id": 1 00:39:55.606 } 00:39:55.606 Got JSON-RPC error response 00:39:55.606 response: 00:39:55.606 { 00:39:55.606 "code": -19, 00:39:55.606 "message": "No such device" 00:39:55.606 } 00:39:55.606 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:39:55.606 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:55.606 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:55.606 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:55.606 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:56.170 aio_bdev 00:39:56.170 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
e4510f1d-5b88-40eb-85e8-3fe0c9438f72 00:39:56.170 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=e4510f1d-5b88-40eb-85e8-3fe0c9438f72 00:39:56.170 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:56.170 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:39:56.170 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:56.170 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:56.170 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:56.170 21:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e4510f1d-5b88-40eb-85e8-3fe0c9438f72 -t 2000 00:39:56.737 [ 00:39:56.737 { 00:39:56.737 "name": "e4510f1d-5b88-40eb-85e8-3fe0c9438f72", 00:39:56.737 "aliases": [ 00:39:56.737 "lvs/lvol" 00:39:56.737 ], 00:39:56.737 "product_name": "Logical Volume", 00:39:56.737 "block_size": 4096, 00:39:56.737 "num_blocks": 38912, 00:39:56.737 "uuid": "e4510f1d-5b88-40eb-85e8-3fe0c9438f72", 00:39:56.737 "assigned_rate_limits": { 00:39:56.737 "rw_ios_per_sec": 0, 00:39:56.737 "rw_mbytes_per_sec": 0, 00:39:56.737 "r_mbytes_per_sec": 0, 00:39:56.737 "w_mbytes_per_sec": 0 00:39:56.738 }, 00:39:56.738 "claimed": false, 00:39:56.738 "zoned": false, 00:39:56.738 "supported_io_types": { 00:39:56.738 "read": true, 00:39:56.738 "write": true, 00:39:56.738 "unmap": true, 00:39:56.738 "flush": false, 00:39:56.738 "reset": true, 00:39:56.738 "nvme_admin": false, 00:39:56.738 "nvme_io": false, 00:39:56.738 "nvme_io_md": false, 00:39:56.738 "write_zeroes": true, 00:39:56.738 "zcopy": false, 00:39:56.738 "get_zone_info": false, 00:39:56.738 "zone_management": false, 00:39:56.738 "zone_append": false, 00:39:56.738 "compare": false, 00:39:56.738 "compare_and_write": false, 00:39:56.738 "abort": false, 00:39:56.738 "seek_hole": true, 00:39:56.738 "seek_data": true, 00:39:56.738 "copy": false, 00:39:56.738 "nvme_iov_md": false 00:39:56.738 }, 00:39:56.738 "driver_specific": { 00:39:56.738 "lvol": { 00:39:56.738 "lvol_store_uuid": "32e652ae-553d-42c2-b869-9b6c615175e7", 00:39:56.738 "base_bdev": "aio_bdev", 00:39:56.738 "thin_provision": false, 00:39:56.738 "num_allocated_clusters": 38, 00:39:56.738 "snapshot": false, 00:39:56.738 "clone": false, 00:39:56.738 "esnap_clone": false 00:39:56.738 } 00:39:56.738 } 00:39:56.738 } 00:39:56.738 ] 00:39:56.738 21:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:39:56.738 21:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32e652ae-553d-42c2-b869-9b6c615175e7 00:39:56.738 21:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:56.996 21:29:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:56.996 21:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32e652ae-553d-42c2-b869-9b6c615175e7 00:39:56.996 21:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:57.254 21:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:57.254 21:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e4510f1d-5b88-40eb-85e8-3fe0c9438f72 00:39:57.513 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 32e652ae-553d-42c2-b869-9b6c615175e7 00:39:57.771 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:58.030 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:58.030 00:39:58.030 real 0m19.350s 00:39:58.030 user 0m19.091s 00:39:58.030 sys 0m1.901s 00:39:58.030 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:58.030 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:58.030 ************************************ 00:39:58.030 END TEST lvs_grow_clean 00:39:58.030 ************************************ 00:39:58.030 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:39:58.030 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:58.030 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:58.030 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:58.030 ************************************ 00:39:58.030 START TEST lvs_grow_dirty 00:39:58.030 ************************************ 00:39:58.030 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:39:58.030 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:58.030 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:58.030 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:58.030 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:58.030 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:58.030 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:58.030 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:58.030 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:58.030 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:58.288 21:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:58.288 21:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:58.546 21:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2aa11762-7075-4a70-b837-fbac4b4316e8 00:39:58.546 21:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2aa11762-7075-4a70-b837-fbac4b4316e8 00:39:58.546 21:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:58.805 21:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:58.805 21:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:58.805 21:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2aa11762-7075-4a70-b837-fbac4b4316e8 lvol 150 00:39:59.063 21:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d30c3580-7bec-4164-849a-9b02bd2d0fab 00:39:59.063 21:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:59.063 21:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:59.321 [2024-11-19 21:29:33.095882] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:59.321 [2024-11-19 21:29:33.096016] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:59.321 true 00:39:59.321 21:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2aa11762-7075-4a70-b837-fbac4b4316e8 00:39:59.321 21:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:59.887 21:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:59.887 21:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:59.887 21:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d30c3580-7bec-4164-849a-9b02bd2d0fab 00:40:00.145 21:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:00.403 [2024-11-19 21:29:34.196379] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:00.661 21:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:00.919 21:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3197336 00:40:00.919 21:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:00.919 21:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:00.919 21:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3197336 /var/tmp/bdevperf.sock 00:40:00.919 21:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3197336 ']' 00:40:00.919 21:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:00.919 21:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:00.919 21:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:00.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
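As in the clean case, the bdevperf process is started with its own RPC socket and then pointed at the target over TCP; once I/O is running, the lvstore is grown and its cluster count re-read while writes are still in flight. A minimal sketch of that wiring, assuming the same socket, transport, and subsystem names used above:

    bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &       # 10 s of 4k randwrite
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"                     # grow while I/O runs
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99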
00:40:00.919 21:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:00.919 21:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:00.919 [2024-11-19 21:29:34.568168] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:40:00.919 [2024-11-19 21:29:34.568318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3197336 ] 00:40:00.919 [2024-11-19 21:29:34.712114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:01.177 [2024-11-19 21:29:34.847634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:01.743 21:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:01.743 21:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:40:01.743 21:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:02.309 Nvme0n1 00:40:02.309 21:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:02.567 [ 00:40:02.567 { 00:40:02.567 "name": "Nvme0n1", 00:40:02.567 "aliases": [ 00:40:02.567 "d30c3580-7bec-4164-849a-9b02bd2d0fab" 00:40:02.567 ], 00:40:02.567 "product_name": "NVMe disk", 00:40:02.567 "block_size": 4096, 00:40:02.567 "num_blocks": 38912, 00:40:02.567 "uuid": "d30c3580-7bec-4164-849a-9b02bd2d0fab", 00:40:02.567 "numa_id": 0, 00:40:02.567 "assigned_rate_limits": { 00:40:02.567 "rw_ios_per_sec": 0, 00:40:02.567 "rw_mbytes_per_sec": 0, 00:40:02.567 "r_mbytes_per_sec": 0, 00:40:02.567 "w_mbytes_per_sec": 0 00:40:02.567 }, 00:40:02.567 "claimed": false, 00:40:02.567 "zoned": false, 00:40:02.567 "supported_io_types": { 00:40:02.567 "read": true, 00:40:02.567 "write": true, 00:40:02.567 "unmap": true, 00:40:02.567 "flush": true, 00:40:02.567 "reset": true, 00:40:02.567 "nvme_admin": true, 00:40:02.567 "nvme_io": true, 00:40:02.567 "nvme_io_md": false, 00:40:02.567 "write_zeroes": true, 00:40:02.567 "zcopy": false, 00:40:02.567 "get_zone_info": false, 00:40:02.567 "zone_management": false, 00:40:02.567 "zone_append": false, 00:40:02.567 "compare": true, 00:40:02.567 "compare_and_write": true, 00:40:02.567 "abort": true, 00:40:02.567 "seek_hole": false, 00:40:02.567 "seek_data": false, 00:40:02.567 "copy": true, 00:40:02.567 "nvme_iov_md": false 00:40:02.567 }, 00:40:02.567 "memory_domains": [ 00:40:02.567 { 00:40:02.567 "dma_device_id": "system", 00:40:02.567 "dma_device_type": 1 00:40:02.567 } 00:40:02.567 ], 00:40:02.567 "driver_specific": { 00:40:02.567 "nvme": [ 00:40:02.567 { 00:40:02.567 "trid": { 00:40:02.567 "trtype": "TCP", 00:40:02.567 "adrfam": "IPv4", 00:40:02.567 "traddr": "10.0.0.2", 00:40:02.567 "trsvcid": "4420", 00:40:02.567 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:02.567 }, 00:40:02.567 "ctrlr_data": 
{ 00:40:02.567 "cntlid": 1, 00:40:02.567 "vendor_id": "0x8086", 00:40:02.567 "model_number": "SPDK bdev Controller", 00:40:02.567 "serial_number": "SPDK0", 00:40:02.567 "firmware_revision": "25.01", 00:40:02.567 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:02.567 "oacs": { 00:40:02.567 "security": 0, 00:40:02.567 "format": 0, 00:40:02.567 "firmware": 0, 00:40:02.567 "ns_manage": 0 00:40:02.567 }, 00:40:02.567 "multi_ctrlr": true, 00:40:02.567 "ana_reporting": false 00:40:02.567 }, 00:40:02.567 "vs": { 00:40:02.567 "nvme_version": "1.3" 00:40:02.567 }, 00:40:02.567 "ns_data": { 00:40:02.567 "id": 1, 00:40:02.567 "can_share": true 00:40:02.567 } 00:40:02.567 } 00:40:02.567 ], 00:40:02.567 "mp_policy": "active_passive" 00:40:02.567 } 00:40:02.567 } 00:40:02.567 ] 00:40:02.567 21:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3197470 00:40:02.567 21:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:02.567 21:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:02.567 Running I/O for 10 seconds... 00:40:03.502 Latency(us) 00:40:03.502 [2024-11-19T20:29:37.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:03.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:03.502 Nvme0n1 : 1.00 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:40:03.502 [2024-11-19T20:29:37.297Z] =================================================================================================================== 00:40:03.502 [2024-11-19T20:29:37.297Z] Total : 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:40:03.502 00:40:04.436 21:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2aa11762-7075-4a70-b837-fbac4b4316e8 00:40:04.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:04.701 Nvme0n1 : 2.00 10795.00 42.17 0.00 0.00 0.00 0.00 0.00 00:40:04.701 [2024-11-19T20:29:38.496Z] =================================================================================================================== 00:40:04.701 [2024-11-19T20:29:38.496Z] Total : 10795.00 42.17 0.00 0.00 0.00 0.00 0.00 00:40:04.701 00:40:04.701 true 00:40:05.036 21:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2aa11762-7075-4a70-b837-fbac4b4316e8 00:40:05.036 21:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:05.036 21:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:05.036 21:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:05.036 21:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3197470 00:40:05.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:05.630 Nvme0n1 : 
3.00 10795.00 42.17 0.00 0.00 0.00 0.00 0.00 00:40:05.630 [2024-11-19T20:29:39.425Z] =================================================================================================================== 00:40:05.630 [2024-11-19T20:29:39.425Z] Total : 10795.00 42.17 0.00 0.00 0.00 0.00 0.00 00:40:05.630 00:40:06.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:06.564 Nvme0n1 : 4.00 10826.75 42.29 0.00 0.00 0.00 0.00 0.00 00:40:06.564 [2024-11-19T20:29:40.359Z] =================================================================================================================== 00:40:06.564 [2024-11-19T20:29:40.359Z] Total : 10826.75 42.29 0.00 0.00 0.00 0.00 0.00 00:40:06.564 00:40:07.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:07.499 Nvme0n1 : 5.00 10833.20 42.32 0.00 0.00 0.00 0.00 0.00 00:40:07.499 [2024-11-19T20:29:41.294Z] =================================================================================================================== 00:40:07.499 [2024-11-19T20:29:41.294Z] Total : 10833.20 42.32 0.00 0.00 0.00 0.00 0.00 00:40:07.499 00:40:08.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:08.870 Nvme0n1 : 6.00 10879.67 42.50 0.00 0.00 0.00 0.00 0.00 00:40:08.870 [2024-11-19T20:29:42.665Z] =================================================================================================================== 00:40:08.870 [2024-11-19T20:29:42.665Z] Total : 10879.67 42.50 0.00 0.00 0.00 0.00 0.00 00:40:08.870 00:40:09.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:09.804 Nvme0n1 : 7.00 10903.86 42.59 0.00 0.00 0.00 0.00 0.00 00:40:09.804 [2024-11-19T20:29:43.599Z] =================================================================================================================== 00:40:09.804 [2024-11-19T20:29:43.599Z] Total : 10903.86 42.59 0.00 0.00 0.00 0.00 0.00 00:40:09.804 00:40:10.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:10.737 Nvme0n1 : 8.00 10937.88 42.73 0.00 0.00 0.00 0.00 0.00 00:40:10.737 [2024-11-19T20:29:44.532Z] =================================================================================================================== 00:40:10.737 [2024-11-19T20:29:44.532Z] Total : 10937.88 42.73 0.00 0.00 0.00 0.00 0.00 00:40:10.737 00:40:11.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:11.670 Nvme0n1 : 9.00 10950.22 42.77 0.00 0.00 0.00 0.00 0.00 00:40:11.670 [2024-11-19T20:29:45.465Z] =================================================================================================================== 00:40:11.670 [2024-11-19T20:29:45.465Z] Total : 10950.22 42.77 0.00 0.00 0.00 0.00 0.00 00:40:11.670 00:40:12.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:12.603 Nvme0n1 : 10.00 10972.80 42.86 0.00 0.00 0.00 0.00 0.00 00:40:12.603 [2024-11-19T20:29:46.398Z] =================================================================================================================== 00:40:12.603 [2024-11-19T20:29:46.398Z] Total : 10972.80 42.86 0.00 0.00 0.00 0.00 0.00 00:40:12.603 00:40:12.603 00:40:12.603 Latency(us) 00:40:12.603 [2024-11-19T20:29:46.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:12.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:12.603 Nvme0n1 : 10.01 10974.90 42.87 0.00 0.00 11655.93 9466.31 24563.86 00:40:12.603 
[2024-11-19T20:29:46.398Z] =================================================================================================================== 00:40:12.603 [2024-11-19T20:29:46.398Z] Total : 10974.90 42.87 0.00 0.00 11655.93 9466.31 24563.86 00:40:12.603 { 00:40:12.603 "results": [ 00:40:12.603 { 00:40:12.603 "job": "Nvme0n1", 00:40:12.603 "core_mask": "0x2", 00:40:12.603 "workload": "randwrite", 00:40:12.603 "status": "finished", 00:40:12.603 "queue_depth": 128, 00:40:12.603 "io_size": 4096, 00:40:12.603 "runtime": 10.009747, 00:40:12.603 "iops": 10974.902762277608, 00:40:12.603 "mibps": 42.87071391514691, 00:40:12.603 "io_failed": 0, 00:40:12.603 "io_timeout": 0, 00:40:12.603 "avg_latency_us": 11655.934229644732, 00:40:12.603 "min_latency_us": 9466.31111111111, 00:40:12.603 "max_latency_us": 24563.863703703704 00:40:12.603 } 00:40:12.603 ], 00:40:12.603 "core_count": 1 00:40:12.603 } 00:40:12.603 21:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3197336 00:40:12.603 21:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3197336 ']' 00:40:12.604 21:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3197336 00:40:12.604 21:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:40:12.604 21:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:12.604 21:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3197336 00:40:12.604 21:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:12.604 21:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:12.604 21:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3197336' 00:40:12.604 killing process with pid 3197336 00:40:12.604 21:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3197336 00:40:12.604 Received shutdown signal, test time was about 10.000000 seconds 00:40:12.604 00:40:12.604 Latency(us) 00:40:12.604 [2024-11-19T20:29:46.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:12.604 [2024-11-19T20:29:46.399Z] =================================================================================================================== 00:40:12.604 [2024-11-19T20:29:46.399Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:12.604 21:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3197336 00:40:13.538 21:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:13.796 21:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:40:14.054 21:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2aa11762-7075-4a70-b837-fbac4b4316e8 00:40:14.054 21:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:14.312 21:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:14.312 21:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:40:14.312 21:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3194479 00:40:14.312 21:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3194479 00:40:14.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3194479 Killed "${NVMF_APP[@]}" "$@" 00:40:14.570 21:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:40:14.570 21:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:40:14.570 21:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:14.570 21:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:14.570 21:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:14.570 21:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3198919 00:40:14.570 21:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3198919 00:40:14.570 21:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:14.570 21:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3198919 ']' 00:40:14.570 21:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:14.570 21:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:14.570 21:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:14.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
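[editor's note] The trace above kills the stale nvmf_tgt and relaunches it with --interrupt-mode, then blocks in waitforlisten until the RPC socket answers. A minimal stand-alone sketch of that wait step (the spdk_get_version RPC and the retry budget are assumptions for illustration, not taken from this log; the real helper lives in common/autotest_common.sh):

rpc_sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    # succeeds only once the relaunched target is servicing JSON-RPC requests
    if ./scripts/rpc.py -s "$rpc_sock" spdk_get_version >/dev/null 2>&1; then
        break
    fi
    sleep 0.5
done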
00:40:14.570 21:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:14.570 21:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:14.570 [2024-11-19 21:29:48.208983] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:14.570 [2024-11-19 21:29:48.211552] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:40:14.570 [2024-11-19 21:29:48.211666] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:14.570 [2024-11-19 21:29:48.359635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:14.829 [2024-11-19 21:29:48.491536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:14.829 [2024-11-19 21:29:48.491615] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:14.829 [2024-11-19 21:29:48.491643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:14.829 [2024-11-19 21:29:48.491663] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:14.829 [2024-11-19 21:29:48.491684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:14.829 [2024-11-19 21:29:48.493306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:15.088 [2024-11-19 21:29:48.857820] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:15.088 [2024-11-19 21:29:48.858270] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
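[editor's note] The app_setup_trace notices above name the two ways to inspect this run's tracepoints (group mask 0xFFFF, shm id 0). A short sketch following those hints, assuming spdk_trace was built into build/bin in this workspace:

# decode the live trace ring of the nvmf app with shm id 0, as the notice suggests
./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
# or keep the raw shared-memory file for offline analysis/debug
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0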
00:40:15.653 21:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:15.653 21:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:40:15.654 21:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:15.654 21:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:15.654 21:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:15.654 21:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:15.654 21:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:15.912 [2024-11-19 21:29:49.453717] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:40:15.912 [2024-11-19 21:29:49.453954] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:40:15.912 [2024-11-19 21:29:49.454046] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:40:15.912 21:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:40:15.912 21:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d30c3580-7bec-4164-849a-9b02bd2d0fab 00:40:15.912 21:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d30c3580-7bec-4164-849a-9b02bd2d0fab 00:40:15.912 21:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:15.912 21:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:40:15.912 21:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:15.912 21:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:15.912 21:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:16.170 21:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d30c3580-7bec-4164-849a-9b02bd2d0fab -t 2000 00:40:16.427 [ 00:40:16.427 { 00:40:16.427 "name": "d30c3580-7bec-4164-849a-9b02bd2d0fab", 00:40:16.427 "aliases": [ 00:40:16.427 "lvs/lvol" 00:40:16.427 ], 00:40:16.427 "product_name": "Logical Volume", 00:40:16.427 "block_size": 4096, 00:40:16.427 "num_blocks": 38912, 00:40:16.427 "uuid": "d30c3580-7bec-4164-849a-9b02bd2d0fab", 00:40:16.427 "assigned_rate_limits": { 00:40:16.427 "rw_ios_per_sec": 0, 00:40:16.427 "rw_mbytes_per_sec": 0, 00:40:16.427 
"r_mbytes_per_sec": 0, 00:40:16.427 "w_mbytes_per_sec": 0 00:40:16.427 }, 00:40:16.427 "claimed": false, 00:40:16.427 "zoned": false, 00:40:16.427 "supported_io_types": { 00:40:16.427 "read": true, 00:40:16.427 "write": true, 00:40:16.427 "unmap": true, 00:40:16.427 "flush": false, 00:40:16.428 "reset": true, 00:40:16.428 "nvme_admin": false, 00:40:16.428 "nvme_io": false, 00:40:16.428 "nvme_io_md": false, 00:40:16.428 "write_zeroes": true, 00:40:16.428 "zcopy": false, 00:40:16.428 "get_zone_info": false, 00:40:16.428 "zone_management": false, 00:40:16.428 "zone_append": false, 00:40:16.428 "compare": false, 00:40:16.428 "compare_and_write": false, 00:40:16.428 "abort": false, 00:40:16.428 "seek_hole": true, 00:40:16.428 "seek_data": true, 00:40:16.428 "copy": false, 00:40:16.428 "nvme_iov_md": false 00:40:16.428 }, 00:40:16.428 "driver_specific": { 00:40:16.428 "lvol": { 00:40:16.428 "lvol_store_uuid": "2aa11762-7075-4a70-b837-fbac4b4316e8", 00:40:16.428 "base_bdev": "aio_bdev", 00:40:16.428 "thin_provision": false, 00:40:16.428 "num_allocated_clusters": 38, 00:40:16.428 "snapshot": false, 00:40:16.428 "clone": false, 00:40:16.428 "esnap_clone": false 00:40:16.428 } 00:40:16.428 } 00:40:16.428 } 00:40:16.428 ] 00:40:16.428 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:40:16.428 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2aa11762-7075-4a70-b837-fbac4b4316e8 00:40:16.428 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:40:16.685 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:40:16.685 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2aa11762-7075-4a70-b837-fbac4b4316e8 00:40:16.685 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:40:16.944 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:40:16.944 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:17.202 [2024-11-19 21:29:50.850311] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:17.202 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2aa11762-7075-4a70-b837-fbac4b4316e8 00:40:17.202 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:40:17.202 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2aa11762-7075-4a70-b837-fbac4b4316e8 00:40:17.202 21:29:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:17.202 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:17.202 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:17.202 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:17.202 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:17.202 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:17.202 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:17.202 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:17.202 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2aa11762-7075-4a70-b837-fbac4b4316e8 00:40:17.459 request: 00:40:17.459 { 00:40:17.459 "uuid": "2aa11762-7075-4a70-b837-fbac4b4316e8", 00:40:17.459 "method": "bdev_lvol_get_lvstores", 00:40:17.459 "req_id": 1 00:40:17.459 } 00:40:17.459 Got JSON-RPC error response 00:40:17.459 response: 00:40:17.459 { 00:40:17.459 "code": -19, 00:40:17.459 "message": "No such device" 00:40:17.459 } 00:40:17.459 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:40:17.459 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:17.459 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:17.459 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:17.459 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:18.027 aio_bdev 00:40:18.027 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d30c3580-7bec-4164-849a-9b02bd2d0fab 00:40:18.027 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d30c3580-7bec-4164-849a-9b02bd2d0fab 00:40:18.027 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:18.027 21:29:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:40:18.027 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:18.027 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:18.027 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:18.027 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d30c3580-7bec-4164-849a-9b02bd2d0fab -t 2000 00:40:18.596 [ 00:40:18.596 { 00:40:18.596 "name": "d30c3580-7bec-4164-849a-9b02bd2d0fab", 00:40:18.596 "aliases": [ 00:40:18.596 "lvs/lvol" 00:40:18.596 ], 00:40:18.596 "product_name": "Logical Volume", 00:40:18.596 "block_size": 4096, 00:40:18.596 "num_blocks": 38912, 00:40:18.596 "uuid": "d30c3580-7bec-4164-849a-9b02bd2d0fab", 00:40:18.596 "assigned_rate_limits": { 00:40:18.596 "rw_ios_per_sec": 0, 00:40:18.596 "rw_mbytes_per_sec": 0, 00:40:18.596 "r_mbytes_per_sec": 0, 00:40:18.596 "w_mbytes_per_sec": 0 00:40:18.596 }, 00:40:18.596 "claimed": false, 00:40:18.596 "zoned": false, 00:40:18.596 "supported_io_types": { 00:40:18.596 "read": true, 00:40:18.596 "write": true, 00:40:18.596 "unmap": true, 00:40:18.596 "flush": false, 00:40:18.596 "reset": true, 00:40:18.596 "nvme_admin": false, 00:40:18.596 "nvme_io": false, 00:40:18.596 "nvme_io_md": false, 00:40:18.596 "write_zeroes": true, 00:40:18.596 "zcopy": false, 00:40:18.596 "get_zone_info": false, 00:40:18.596 "zone_management": false, 00:40:18.596 "zone_append": false, 00:40:18.596 "compare": false, 00:40:18.596 "compare_and_write": false, 00:40:18.596 "abort": false, 00:40:18.596 "seek_hole": true, 00:40:18.596 "seek_data": true, 00:40:18.596 "copy": false, 00:40:18.596 "nvme_iov_md": false 00:40:18.596 }, 00:40:18.596 "driver_specific": { 00:40:18.596 "lvol": { 00:40:18.596 "lvol_store_uuid": "2aa11762-7075-4a70-b837-fbac4b4316e8", 00:40:18.596 "base_bdev": "aio_bdev", 00:40:18.596 "thin_provision": false, 00:40:18.596 "num_allocated_clusters": 38, 00:40:18.596 "snapshot": false, 00:40:18.596 "clone": false, 00:40:18.596 "esnap_clone": false 00:40:18.596 } 00:40:18.596 } 00:40:18.596 } 00:40:18.596 ] 00:40:18.596 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:40:18.596 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2aa11762-7075-4a70-b837-fbac4b4316e8 00:40:18.596 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:18.854 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:18.854 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2aa11762-7075-4a70-b837-fbac4b4316e8 00:40:18.854 21:29:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:19.111 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:19.111 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d30c3580-7bec-4164-849a-9b02bd2d0fab 00:40:19.369 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2aa11762-7075-4a70-b837-fbac4b4316e8 00:40:19.627 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:19.886 00:40:19.886 real 0m21.891s 00:40:19.886 user 0m38.458s 00:40:19.886 sys 0m5.041s 00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:19.886 ************************************ 00:40:19.886 END TEST lvs_grow_dirty 00:40:19.886 ************************************ 00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:40:19.886 nvmf_trace.0 00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
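[editor's note] process_shm above locates the per-app trace file under /dev/shm and packs it into the job's output directory. A stand-alone equivalent of that archiving step (the destination path is illustrative):

shm_file=nvmf_trace.0
tar -C /dev/shm -czf "/tmp/${shm_file}_shm.tar.gz" "$shm_file"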
00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:19.886 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:19.886 rmmod nvme_tcp 00:40:20.145 rmmod nvme_fabrics 00:40:20.145 rmmod nvme_keyring 00:40:20.145 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:20.145 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:40:20.145 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:40:20.145 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3198919 ']' 00:40:20.145 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3198919 00:40:20.145 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3198919 ']' 00:40:20.145 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3198919 00:40:20.145 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:40:20.145 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:20.145 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3198919 00:40:20.145 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:20.145 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:20.145 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3198919' 00:40:20.145 killing process with pid 3198919 00:40:20.145 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3198919 00:40:20.145 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3198919 00:40:21.522 21:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:21.522 21:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:21.522 21:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:21.522 21:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:40:21.522 21:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:40:21.522 21:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:21.522 21:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:40:21.522 21:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:21.522 21:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:21.522 21:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:21.522 21:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:21.522 21:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:23.429 21:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:23.429 00:40:23.429 real 0m48.249s 00:40:23.429 user 1m0.817s 00:40:23.429 sys 0m8.951s 00:40:23.429 21:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:23.429 21:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:23.429 ************************************ 00:40:23.429 END TEST nvmf_lvs_grow 00:40:23.429 ************************************ 00:40:23.429 21:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:23.429 21:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:23.429 21:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:23.429 21:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:23.429 ************************************ 00:40:23.429 START TEST nvmf_bdev_io_wait 00:40:23.429 ************************************ 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:23.429 * Looking for test storage... 
00:40:23.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:40:23.429 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:23.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:23.430 --rc genhtml_branch_coverage=1 00:40:23.430 --rc genhtml_function_coverage=1 00:40:23.430 --rc genhtml_legend=1 00:40:23.430 --rc geninfo_all_blocks=1 00:40:23.430 --rc geninfo_unexecuted_blocks=1 00:40:23.430 00:40:23.430 ' 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:23.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:23.430 --rc genhtml_branch_coverage=1 00:40:23.430 --rc genhtml_function_coverage=1 00:40:23.430 --rc genhtml_legend=1 00:40:23.430 --rc geninfo_all_blocks=1 00:40:23.430 --rc geninfo_unexecuted_blocks=1 00:40:23.430 00:40:23.430 ' 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:23.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:23.430 --rc genhtml_branch_coverage=1 00:40:23.430 --rc genhtml_function_coverage=1 00:40:23.430 --rc genhtml_legend=1 00:40:23.430 --rc geninfo_all_blocks=1 00:40:23.430 --rc geninfo_unexecuted_blocks=1 00:40:23.430 00:40:23.430 ' 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:23.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:23.430 --rc genhtml_branch_coverage=1 00:40:23.430 --rc genhtml_function_coverage=1 00:40:23.430 --rc genhtml_legend=1 00:40:23.430 --rc geninfo_all_blocks=1 00:40:23.430 --rc 
geninfo_unexecuted_blocks=1 00:40:23.430 00:40:23.430 ' 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:40:23.430 21:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:25.959 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:25.959 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:40:25.959 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:25.959 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:25.959 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:25.959 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:25.959 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:25.960 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:25.960 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:25.960 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:25.960 
21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:25.960 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:25.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:25.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:40:25.960 00:40:25.960 --- 10.0.0.2 ping statistics --- 00:40:25.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:25.960 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:40:25.960 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:25.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:25.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:40:25.961 00:40:25.961 --- 10.0.0.1 ping statistics --- 00:40:25.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:25.961 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3201693 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3201693 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3201693 ']' 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:25.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:25.961 21:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:25.961 [2024-11-19 21:29:59.475664] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:25.961 [2024-11-19 21:29:59.478176] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:40:25.961 [2024-11-19 21:29:59.478278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:25.961 [2024-11-19 21:29:59.624605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:26.219 [2024-11-19 21:29:59.762806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:26.219 [2024-11-19 21:29:59.762874] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:26.219 [2024-11-19 21:29:59.762903] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:26.219 [2024-11-19 21:29:59.762925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:26.219 [2024-11-19 21:29:59.762947] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:26.219 [2024-11-19 21:29:59.765669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:26.219 [2024-11-19 21:29:59.765750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:26.219 [2024-11-19 21:29:59.765823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:26.219 [2024-11-19 21:29:59.765833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:26.219 [2024-11-19 21:29:59.766539] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
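For readability, the namespace plumbing and target launch performed by the xtrace above collapse into the following standalone sequence. This is only a sketch of what nvmf_tcp_init and nvmfappstart run in this job; the cvl_0_0/cvl_0_1 names, 10.0.0.x addresses, iptables rule and nvmf_tgt arguments are taken from the trace itself, while SPDK_DIR simply points at the tree this job checked out.

# Sketch: recreate the namespace-backed NVMe/TCP topology used by this test.
# Assumes both e810 ports are already bound to the kernel ice driver as cvl_0_0/cvl_0_1.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP stays in the default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port; the comment tag is what the teardown greps for to remove the rule.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                                        # default namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                    # target namespace -> default namespace

# Start the target inside the namespace, held at --wait-for-rpc until framework_start_init arrives.
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &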
00:40:26.785 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:26.785 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:40:26.785 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:26.785 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:26.785 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:26.785 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:26.785 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:40:26.785 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.786 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:26.786 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.786 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:40:26.786 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.786 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:27.044 [2024-11-19 21:30:00.733708] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:27.044 [2024-11-19 21:30:00.735972] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:27.044 [2024-11-19 21:30:00.736433] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:27.044 [2024-11-19 21:30:00.737034] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
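Because the target was started with --wait-for-rpc, the rest of the setup is plain JSON-RPC: the bdev_set_options and framework_start_init calls above, plus the transport, malloc bdev, subsystem, namespace and listener calls that continue in the trace below. rpc_cmd is the test-harness wrapper around those RPCs; driven by hand with scripts/rpc.py (talking to the /var/tmp/spdk.sock socket shown above; the rpc.py location is the standard one in the tree, not something echoed in the log), the same provisioning looks roughly like this:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC bdev_set_options -p 5 -c 1                   # deliberately tiny bdev_io pool/cache for the io_wait test
$RPC framework_start_init                         # release the --wait-for-rpc hold
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420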
00:40:27.044 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.044 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:27.044 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.044 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:27.044 [2024-11-19 21:30:00.742821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:27.044 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.044 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:27.044 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.044 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:27.303 Malloc0 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:27.303 [2024-11-19 21:30:00.871150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3201915 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3201917 00:40:27.303 21:30:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3201922 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:27.303 { 00:40:27.303 "params": { 00:40:27.303 "name": "Nvme$subsystem", 00:40:27.303 "trtype": "$TEST_TRANSPORT", 00:40:27.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:27.303 "adrfam": "ipv4", 00:40:27.303 "trsvcid": "$NVMF_PORT", 00:40:27.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:27.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:27.303 "hdgst": ${hdgst:-false}, 00:40:27.303 "ddgst": ${ddgst:-false} 00:40:27.303 }, 00:40:27.303 "method": "bdev_nvme_attach_controller" 00:40:27.303 } 00:40:27.303 EOF 00:40:27.303 )") 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3201924 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:27.303 { 00:40:27.303 "params": { 00:40:27.303 "name": "Nvme$subsystem", 00:40:27.303 "trtype": "$TEST_TRANSPORT", 00:40:27.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:27.303 "adrfam": "ipv4", 00:40:27.303 "trsvcid": "$NVMF_PORT", 00:40:27.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:27.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:27.303 "hdgst": ${hdgst:-false}, 00:40:27.303 "ddgst": ${ddgst:-false} 00:40:27.303 }, 00:40:27.303 "method": "bdev_nvme_attach_controller" 00:40:27.303 } 00:40:27.303 EOF 
00:40:27.303 )") 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:40:27.303 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:27.303 { 00:40:27.303 "params": { 00:40:27.303 "name": "Nvme$subsystem", 00:40:27.303 "trtype": "$TEST_TRANSPORT", 00:40:27.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:27.304 "adrfam": "ipv4", 00:40:27.304 "trsvcid": "$NVMF_PORT", 00:40:27.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:27.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:27.304 "hdgst": ${hdgst:-false}, 00:40:27.304 "ddgst": ${ddgst:-false} 00:40:27.304 }, 00:40:27.304 "method": "bdev_nvme_attach_controller" 00:40:27.304 } 00:40:27.304 EOF 00:40:27.304 )") 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:27.304 { 00:40:27.304 "params": { 00:40:27.304 "name": "Nvme$subsystem", 00:40:27.304 "trtype": "$TEST_TRANSPORT", 00:40:27.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:27.304 "adrfam": "ipv4", 00:40:27.304 "trsvcid": "$NVMF_PORT", 00:40:27.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:27.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:27.304 "hdgst": ${hdgst:-false}, 00:40:27.304 "ddgst": ${ddgst:-false} 00:40:27.304 }, 00:40:27.304 "method": "bdev_nvme_attach_controller" 00:40:27.304 } 00:40:27.304 EOF 00:40:27.304 )") 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3201915 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:27.304 "params": { 00:40:27.304 "name": "Nvme1", 00:40:27.304 "trtype": "tcp", 00:40:27.304 "traddr": "10.0.0.2", 00:40:27.304 "adrfam": "ipv4", 00:40:27.304 "trsvcid": "4420", 00:40:27.304 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:27.304 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:27.304 "hdgst": false, 00:40:27.304 "ddgst": false 00:40:27.304 }, 00:40:27.304 "method": "bdev_nvme_attach_controller" 00:40:27.304 }' 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:27.304 "params": { 00:40:27.304 "name": "Nvme1", 00:40:27.304 "trtype": "tcp", 00:40:27.304 "traddr": "10.0.0.2", 00:40:27.304 "adrfam": "ipv4", 00:40:27.304 "trsvcid": "4420", 00:40:27.304 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:27.304 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:27.304 "hdgst": false, 00:40:27.304 "ddgst": false 00:40:27.304 }, 00:40:27.304 "method": "bdev_nvme_attach_controller" 00:40:27.304 }' 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:27.304 "params": { 00:40:27.304 "name": "Nvme1", 00:40:27.304 "trtype": "tcp", 00:40:27.304 "traddr": "10.0.0.2", 00:40:27.304 "adrfam": "ipv4", 00:40:27.304 "trsvcid": "4420", 00:40:27.304 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:27.304 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:27.304 "hdgst": false, 00:40:27.304 "ddgst": false 00:40:27.304 }, 00:40:27.304 "method": "bdev_nvme_attach_controller" 00:40:27.304 }' 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:27.304 21:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:27.304 "params": { 00:40:27.304 "name": "Nvme1", 00:40:27.304 "trtype": "tcp", 00:40:27.304 "traddr": "10.0.0.2", 00:40:27.304 "adrfam": "ipv4", 00:40:27.304 "trsvcid": "4420", 00:40:27.304 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:27.304 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:27.304 "hdgst": false, 00:40:27.304 "ddgst": false 00:40:27.304 }, 00:40:27.304 "method": "bdev_nvme_attach_controller" 00:40:27.304 }' 00:40:27.304 [2024-11-19 21:30:00.959980] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:40:27.304 [2024-11-19 21:30:00.959982] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:40:27.304 [2024-11-19 21:30:00.960147] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:40:27.304 [2024-11-19 21:30:00.960149] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:40:27.304 [2024-11-19 21:30:00.961346] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:40:27.304 [2024-11-19 21:30:00.961339] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:40:27.304 [2024-11-19 21:30:00.961479] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:40:27.304 [2024-11-19 21:30:00.961482] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:40:27.562 [2024-11-19 21:30:01.216512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:27.562 [2024-11-19 21:30:01.327324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:27.562 [2024-11-19 21:30:01.340118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:27.820 [2024-11-19 21:30:01.438607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:27.820 [2024-11-19 21:30:01.449691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:27.820 [2024-11-19 21:30:01.485956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:27.820 [2024-11-19 21:30:01.555539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:40:27.820 [2024-11-19 21:30:01.602590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:28.077 Running I/O for 1 seconds... 00:40:28.335 Running I/O for 1 seconds... 00:40:28.335 Running I/O for 1 seconds... 00:40:28.335 Running I/O for 1 seconds...
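Each bdevperf instance above receives its controller definition over an anonymous pipe (--json /dev/fd/63); the bdev_nvme_attach_controller parameters are exactly the gen_nvmf_target_json output printed in the trace. A hand-run sketch of just the write instance follows; the temporary file name is invented for illustration, and the subsystems/bdev wrapper is the usual SPDK JSON-config layout rather than something echoed verbatim in this log.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 128 outstanding 4 KiB writes for 1 second, pinned to core 4 (mask 0x10), 256 MB of memory.
"$SPDK_DIR/build/examples/bdevperf" -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256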
00:40:29.294 5616.00 IOPS, 21.94 MiB/s 00:40:29.294 Latency(us) 00:40:29.294 [2024-11-19T20:30:03.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:29.294 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:40:29.294 Nvme1n1 : 1.03 5570.17 21.76 0.00 0.00 22594.36 6844.87 39418.69 00:40:29.294 [2024-11-19T20:30:03.089Z] =================================================================================================================== 00:40:29.294 [2024-11-19T20:30:03.089Z] Total : 5570.17 21.76 0.00 0.00 22594.36 6844.87 39418.69 00:40:29.294 6678.00 IOPS, 26.09 MiB/s 00:40:29.294 Latency(us) 00:40:29.294 [2024-11-19T20:30:03.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:29.294 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:40:29.294 Nvme1n1 : 1.01 6719.03 26.25 0.00 0.00 18924.78 6505.05 26991.12 00:40:29.294 [2024-11-19T20:30:03.089Z] =================================================================================================================== 00:40:29.294 [2024-11-19T20:30:03.089Z] Total : 6719.03 26.25 0.00 0.00 18924.78 6505.05 26991.12 00:40:29.294 157784.00 IOPS, 616.34 MiB/s 00:40:29.294 Latency(us) 00:40:29.294 [2024-11-19T20:30:03.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:29.294 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:40:29.294 Nvme1n1 : 1.00 157464.95 615.10 0.00 0.00 808.64 374.71 2002.49 00:40:29.294 [2024-11-19T20:30:03.089Z] =================================================================================================================== 00:40:29.294 [2024-11-19T20:30:03.089Z] Total : 157464.95 615.10 0.00 0.00 808.64 374.71 2002.49 00:40:29.551 5754.00 IOPS, 22.48 MiB/s 00:40:29.551 Latency(us) 00:40:29.551 [2024-11-19T20:30:03.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:29.551 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:40:29.551 Nvme1n1 : 1.01 5862.27 22.90 0.00 0.00 21748.16 5995.33 48933.55 00:40:29.551 [2024-11-19T20:30:03.346Z] =================================================================================================================== 00:40:29.551 [2024-11-19T20:30:03.346Z] Total : 5862.27 22.90 0.00 0.00 21748.16 5995.33 48933.55 00:40:29.809 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3201917 00:40:29.809 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3201922 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3201924 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:30.068 rmmod nvme_tcp 00:40:30.068 rmmod nvme_fabrics 00:40:30.068 rmmod nvme_keyring 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3201693 ']' 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3201693 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3201693 ']' 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3201693 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:30.068 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3201693 00:40:30.326 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:30.326 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:30.326 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3201693' 00:40:30.326 killing process with pid 3201693 00:40:30.326 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3201693 00:40:30.326 21:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3201693 00:40:31.259 21:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:31.259 21:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:31.259 21:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:31.259 21:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:40:31.259 21:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:40:31.259 21:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:31.259 21:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:40:31.259 21:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:31.259 21:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:31.259 21:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:31.259 21:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:31.259 21:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:33.216 21:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:33.216 00:40:33.216 real 0m9.874s 00:40:33.216 user 0m21.763s 00:40:33.216 sys 0m4.934s 00:40:33.216 21:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:33.216 21:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:33.216 ************************************ 00:40:33.216 END TEST nvmf_bdev_io_wait 00:40:33.216 ************************************ 00:40:33.216 21:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:33.216 21:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:33.216 21:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:33.216 21:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:33.216 ************************************ 00:40:33.216 START TEST nvmf_queue_depth 00:40:33.216 ************************************ 00:40:33.216 21:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:33.216 * Looking for test storage... 
00:40:33.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:33.216 21:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:33.216 21:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:40:33.216 21:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:33.474 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:33.474 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:33.474 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:33.474 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:33.474 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:40:33.474 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:40:33.474 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:40:33.474 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:40:33.474 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:40:33.474 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:40:33.474 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:40:33.474 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:33.474 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:40:33.474 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:40:33.474 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:33.474 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:33.474 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:40:33.474 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:40:33.474 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:33.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:33.475 --rc genhtml_branch_coverage=1 00:40:33.475 --rc genhtml_function_coverage=1 00:40:33.475 --rc genhtml_legend=1 00:40:33.475 --rc geninfo_all_blocks=1 00:40:33.475 --rc geninfo_unexecuted_blocks=1 00:40:33.475 00:40:33.475 ' 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:33.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:33.475 --rc genhtml_branch_coverage=1 00:40:33.475 --rc genhtml_function_coverage=1 00:40:33.475 --rc genhtml_legend=1 00:40:33.475 --rc geninfo_all_blocks=1 00:40:33.475 --rc geninfo_unexecuted_blocks=1 00:40:33.475 00:40:33.475 ' 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:33.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:33.475 --rc genhtml_branch_coverage=1 00:40:33.475 --rc genhtml_function_coverage=1 00:40:33.475 --rc genhtml_legend=1 00:40:33.475 --rc geninfo_all_blocks=1 00:40:33.475 --rc geninfo_unexecuted_blocks=1 00:40:33.475 00:40:33.475 ' 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:33.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:33.475 --rc genhtml_branch_coverage=1 00:40:33.475 --rc genhtml_function_coverage=1 00:40:33.475 --rc genhtml_legend=1 00:40:33.475 --rc geninfo_all_blocks=1 00:40:33.475 --rc 
geninfo_unexecuted_blocks=1 00:40:33.475 00:40:33.475 ' 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:40:33.475 21:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:35.376 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:35.377 21:30:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:35.377 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:35.377 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:40:35.377 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:35.377 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:35.377 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:35.635 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:35.635 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:35.635 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:35.635 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:35.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:35.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:40:35.635 00:40:35.635 --- 10.0.0.2 ping statistics --- 00:40:35.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:35.635 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:40:35.635 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:35.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:35.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:40:35.635 00:40:35.635 --- 10.0.0.1 ping statistics --- 00:40:35.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:35.635 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:40:35.635 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:35.635 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:40:35.635 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:35.635 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:35.635 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:35.635 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:35.635 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:35.635 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:35.635 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:35.635 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:40:35.635 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:35.636 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:35.636 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:35.636 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3204838 00:40:35.636 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:40:35.636 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3204838 00:40:35.636 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3204838 ']' 00:40:35.636 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:35.636 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:35.636 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:35.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
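The nvmf_tcp_init trace above builds the two-port test rig by hand: the E810 port cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as the target side, cvl_0_1 stays in the root namespace as the initiator, one iptables rule opens TCP/4420, and both directions are ping-checked before nvmfappstart launches nvmf_tgt inside that namespace with --interrupt-mode -m 0x2. A minimal hand-run sketch of the same steps (interface names, addresses and flags copied from the log; SPDK paths shortened for readability; run as root):

  # target port lives in its own namespace, initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # 10.0.0.1/24 on the initiator side, 10.0.0.2/24 on the target side
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open the NVMe/TCP port; the SPDK_NVMF comment tag is what the later cleanup greps away
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # sanity-check both directions, then start the target inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &

Tagging the iptables rule with the SPDK_NVMF comment is what lets the teardown later in this log restore the firewall with a plain iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline.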
00:40:35.636 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:35.636 21:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:35.636 [2024-11-19 21:30:09.383564] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:35.636 [2024-11-19 21:30:09.386247] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:40:35.636 [2024-11-19 21:30:09.386347] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:35.893 [2024-11-19 21:30:09.544757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:35.893 [2024-11-19 21:30:09.664593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:35.893 [2024-11-19 21:30:09.664663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:35.893 [2024-11-19 21:30:09.664703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:35.893 [2024-11-19 21:30:09.664723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:35.893 [2024-11-19 21:30:09.664742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:35.893 [2024-11-19 21:30:09.666300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:36.459 [2024-11-19 21:30:09.991579] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:36.459 [2024-11-19 21:30:09.991988] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
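The startup notices above also say how to look inside the running target: every tracepoint group was enabled (-e 0xFFFF) and the app prints the two ways to read the trace ring. A short sketch following those printed hints (the spdk_trace binary location under build/bin is an assumption; any path to the built tool works):

  # live snapshot of the trace ring for app instance 0, shm name "nvmf", as the notice suggests
  ./build/bin/spdk_trace -s nvmf -i 0 | head

  # or keep the shared-memory trace file for offline analysis once the run is over
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0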
00:40:36.717 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:36.717 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:40:36.717 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:36.717 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:36.717 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:36.717 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:36.717 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:36.717 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.717 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:36.717 [2024-11-19 21:30:10.431456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:36.717 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.717 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:36.717 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.717 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:36.976 Malloc0 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
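queue_depth.sh lines 23 through 27, traced above, do all of the per-test provisioning over the target's RPC socket: a TCP transport, a 64 MiB / 512 B-block Malloc bdev, one subsystem, its namespace, and a listener on 10.0.0.2:4420 (the listening notice follows just below). A hedged equivalent using scripts/rpc.py directly, with the same arguments as the traced rpc_cmd calls:

  RPC='./scripts/rpc.py'   # talks to the default /var/tmp/spdk.sock created by nvmf_tgt above

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420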
00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:36.976 [2024-11-19 21:30:10.555558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3205024 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3205024 /var/tmp/bdevperf.sock 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3205024 ']' 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:36.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:36.976 21:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:36.976 [2024-11-19 21:30:10.650964] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:40:36.976 [2024-11-19 21:30:10.651114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205024 ] 00:40:37.234 [2024-11-19 21:30:10.810769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:37.234 [2024-11-19 21:30:10.947935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:38.169 21:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:38.169 21:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:40:38.169 21:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:38.169 21:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:38.169 21:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:38.169 NVMe0n1 00:40:38.169 21:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:38.169 21:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:38.169 Running I/O for 10 seconds... 00:40:40.477 6144.00 IOPS, 24.00 MiB/s [2024-11-19T20:30:15.208Z] 6064.50 IOPS, 23.69 MiB/s [2024-11-19T20:30:16.142Z] 6004.33 IOPS, 23.45 MiB/s [2024-11-19T20:30:17.076Z] 5947.50 IOPS, 23.23 MiB/s [2024-11-19T20:30:18.011Z] 5968.20 IOPS, 23.31 MiB/s [2024-11-19T20:30:18.946Z] 6062.50 IOPS, 23.68 MiB/s [2024-11-19T20:30:19.880Z] 6077.00 IOPS, 23.74 MiB/s [2024-11-19T20:30:21.255Z] 6106.62 IOPS, 23.85 MiB/s [2024-11-19T20:30:22.190Z] 6108.56 IOPS, 23.86 MiB/s [2024-11-19T20:30:22.190Z] 6094.00 IOPS, 23.80 MiB/s 00:40:48.395 Latency(us) 00:40:48.395 [2024-11-19T20:30:22.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:48.395 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:40:48.395 Verification LBA range: start 0x0 length 0x4000 00:40:48.395 NVMe0n1 : 10.11 6118.84 23.90 0.00 0.00 166219.41 24369.68 100197.26 00:40:48.395 [2024-11-19T20:30:22.190Z] =================================================================================================================== 00:40:48.395 [2024-11-19T20:30:22.190Z] Total : 6118.84 23.90 0.00 0.00 166219.41 24369.68 100197.26 00:40:48.395 { 00:40:48.395 "results": [ 00:40:48.395 { 00:40:48.395 "job": "NVMe0n1", 00:40:48.395 "core_mask": "0x1", 00:40:48.395 "workload": "verify", 00:40:48.395 "status": "finished", 00:40:48.395 "verify_range": { 00:40:48.395 "start": 0, 00:40:48.395 "length": 16384 00:40:48.395 }, 00:40:48.395 "queue_depth": 1024, 00:40:48.395 "io_size": 4096, 00:40:48.395 "runtime": 10.110419, 00:40:48.395 "iops": 6118.8364201325385, 00:40:48.395 "mibps": 23.90170476614273, 00:40:48.395 "io_failed": 0, 00:40:48.395 "io_timeout": 0, 00:40:48.395 "avg_latency_us": 166219.40854078962, 00:40:48.395 "min_latency_us": 24369.682962962965, 00:40:48.395 "max_latency_us": 100197.26222222223 00:40:48.395 } 
00:40:48.395 ], 00:40:48.395 "core_count": 1 00:40:48.395 } 00:40:48.395 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3205024 00:40:48.395 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3205024 ']' 00:40:48.395 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3205024 00:40:48.395 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:40:48.395 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:48.395 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3205024 00:40:48.395 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:48.395 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:48.395 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3205024' 00:40:48.395 killing process with pid 3205024 00:40:48.395 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3205024 00:40:48.395 Received shutdown signal, test time was about 10.000000 seconds 00:40:48.395 00:40:48.395 Latency(us) 00:40:48.395 [2024-11-19T20:30:22.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:48.395 [2024-11-19T20:30:22.190Z] =================================================================================================================== 00:40:48.395 [2024-11-19T20:30:22.190Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:48.395 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3205024 00:40:49.329 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:40:49.329 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:40:49.329 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:49.329 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:40:49.329 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:49.329 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:40:49.329 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:49.329 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:49.329 rmmod nvme_tcp 00:40:49.329 rmmod nvme_fabrics 00:40:49.329 rmmod nvme_keyring 00:40:49.329 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:49.329 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:40:49.329 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
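The measurement half of the test, visible in the trace above, is three steps: start bdevperf idle with its own RPC socket, attach the exported namespace over NVMe/TCP as NVMe0, then trigger the configured workload through bdevperf.py and read back the JSON summary just printed (iops, mibps, avg_latency_us and friends). A condensed sketch of those steps; the final jq line is illustrative only and assumes the JSON block has been saved to results.json:

  # queue depth 1024, 4 KiB I/O, verify workload, 10 s, started idle (-z) on its own socket
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

  # attach the namespace served at 10.0.0.2:4420 by nqn.2016-06.io.spdk:cnode1 as NVMe0n1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # run the workload and wait for the summary
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

  # illustrative post-processing of the saved summary
  jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json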
00:40:49.329 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3204838 ']' 00:40:49.329 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3204838 00:40:49.329 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3204838 ']' 00:40:49.329 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3204838 00:40:49.329 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:40:49.329 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:49.329 21:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3204838 00:40:49.329 21:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:49.329 21:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:49.329 21:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3204838' 00:40:49.329 killing process with pid 3204838 00:40:49.329 21:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3204838 00:40:49.329 21:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3204838 00:40:50.702 21:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:50.702 21:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:50.702 21:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:50.702 21:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:40:50.702 21:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:40:50.702 21:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:50.702 21:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:40:50.702 21:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:50.702 21:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:50.702 21:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:50.702 21:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:50.702 21:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:52.602 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:52.602 00:40:52.602 real 0m19.464s 00:40:52.602 user 0m26.677s 00:40:52.602 sys 0m3.809s 00:40:52.602 21:30:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:52.602 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:52.602 ************************************ 00:40:52.602 END TEST nvmf_queue_depth 00:40:52.602 ************************************ 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:52.861 ************************************ 00:40:52.861 START TEST nvmf_target_multipath 00:40:52.861 ************************************ 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:52.861 * Looking for test storage... 00:40:52.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:52.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.861 --rc genhtml_branch_coverage=1 00:40:52.861 --rc genhtml_function_coverage=1 00:40:52.861 --rc genhtml_legend=1 00:40:52.861 --rc geninfo_all_blocks=1 00:40:52.861 --rc geninfo_unexecuted_blocks=1 00:40:52.861 00:40:52.861 ' 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:52.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.861 --rc genhtml_branch_coverage=1 00:40:52.861 --rc genhtml_function_coverage=1 00:40:52.861 --rc genhtml_legend=1 00:40:52.861 --rc geninfo_all_blocks=1 00:40:52.861 --rc geninfo_unexecuted_blocks=1 00:40:52.861 00:40:52.861 ' 00:40:52.861 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:52.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.861 --rc genhtml_branch_coverage=1 00:40:52.861 --rc genhtml_function_coverage=1 00:40:52.861 --rc genhtml_legend=1 
00:40:52.861 --rc geninfo_all_blocks=1 00:40:52.862 --rc geninfo_unexecuted_blocks=1 00:40:52.862 00:40:52.862 ' 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:52.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:52.862 --rc genhtml_branch_coverage=1 00:40:52.862 --rc genhtml_function_coverage=1 00:40:52.862 --rc genhtml_legend=1 00:40:52.862 --rc geninfo_all_blocks=1 00:40:52.862 --rc geninfo_unexecuted_blocks=1 00:40:52.862 00:40:52.862 ' 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:40:52.862 21:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:40:55.395 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:55.395 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:40:55.395 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:55.395 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:55.395 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:55.395 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:55.396 21:30:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:55.396 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:55.396 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:55.396 21:30:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:55.396 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:55.396 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:55.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:55.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:40:55.396 00:40:55.396 --- 10.0.0.2 ping statistics --- 00:40:55.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:55.396 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:55.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:55.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:40:55.396 00:40:55.396 --- 10.0.0.1 ping statistics --- 00:40:55.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:55.396 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:40:55.396 only one NIC for nvmf test 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:55.396 rmmod nvme_tcp 00:40:55.396 rmmod nvme_fabrics 00:40:55.396 rmmod nvme_keyring 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:55.396 21:30:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:55.396 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:55.397 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:40:55.397 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:55.397 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:40:55.397 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:55.397 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:55.397 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:55.397 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:55.397 21:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:57.301 21:30:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:57.301 00:40:57.301 real 0m4.400s 00:40:57.301 user 0m0.891s 00:40:57.301 sys 0m1.521s 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:57.301 ************************************ 00:40:57.301 END TEST nvmf_target_multipath 00:40:57.301 ************************************ 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:57.301 ************************************ 00:40:57.301 START TEST nvmf_zcopy 00:40:57.301 ************************************ 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:57.301 * Looking for test storage... 
00:40:57.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:40:57.301 21:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:57.301 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:57.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.302 --rc genhtml_branch_coverage=1 00:40:57.302 --rc genhtml_function_coverage=1 00:40:57.302 --rc genhtml_legend=1 00:40:57.302 --rc geninfo_all_blocks=1 00:40:57.302 --rc geninfo_unexecuted_blocks=1 00:40:57.302 00:40:57.302 ' 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:57.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.302 --rc genhtml_branch_coverage=1 00:40:57.302 --rc genhtml_function_coverage=1 00:40:57.302 --rc genhtml_legend=1 00:40:57.302 --rc geninfo_all_blocks=1 00:40:57.302 --rc geninfo_unexecuted_blocks=1 00:40:57.302 00:40:57.302 ' 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:57.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.302 --rc genhtml_branch_coverage=1 00:40:57.302 --rc genhtml_function_coverage=1 00:40:57.302 --rc genhtml_legend=1 00:40:57.302 --rc geninfo_all_blocks=1 00:40:57.302 --rc geninfo_unexecuted_blocks=1 00:40:57.302 00:40:57.302 ' 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:57.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.302 --rc genhtml_branch_coverage=1 00:40:57.302 --rc genhtml_function_coverage=1 00:40:57.302 --rc genhtml_legend=1 00:40:57.302 --rc geninfo_all_blocks=1 00:40:57.302 --rc geninfo_unexecuted_blocks=1 00:40:57.302 00:40:57.302 ' 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:57.302 21:30:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:40:57.302 21:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:59.871 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:59.871 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:40:59.871 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:59.871 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:59.871 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:59.871 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:59.871 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:59.871 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:40:59.871 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:59.871 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:40:59.871 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:40:59.871 21:30:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:40:59.871 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:40:59.871 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:40:59.871 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:59.872 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:59.872 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:59.872 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:59.872 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:59.872 21:30:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:59.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:59.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:40:59.872 00:40:59.872 --- 10.0.0.2 ping statistics --- 00:40:59.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:59.872 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:59.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:59.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:40:59.872 00:40:59.872 --- 10.0.0.1 ping statistics --- 00:40:59.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:59.872 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:59.872 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:59.873 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:59.873 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:40:59.873 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:59.873 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:59.873 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:59.873 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3210425 00:40:59.873 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:40:59.873 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3210425 00:40:59.873 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3210425 ']' 00:40:59.873 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:59.873 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:59.873 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:59.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:59.873 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:59.873 21:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:59.873 [2024-11-19 21:30:33.469711] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:59.873 [2024-11-19 21:30:33.472504] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:40:59.873 [2024-11-19 21:30:33.472599] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:59.873 [2024-11-19 21:30:33.619977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:00.131 [2024-11-19 21:30:33.735751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:00.131 [2024-11-19 21:30:33.735832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:00.131 [2024-11-19 21:30:33.735855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:00.131 [2024-11-19 21:30:33.735872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:00.131 [2024-11-19 21:30:33.735890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:00.131 [2024-11-19 21:30:33.737220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:00.391 [2024-11-19 21:30:34.071614] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:00.391 [2024-11-19 21:30:34.072036] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:00.659 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:00.659 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:41:00.659 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:00.659 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:00.659 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:00.969 [2024-11-19 21:30:34.466185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:00.969 [2024-11-19 21:30:34.482476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:41:00.969 21:30:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:00.969 malloc0 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:00.969 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:00.969 { 00:41:00.969 "params": { 00:41:00.969 "name": "Nvme$subsystem", 00:41:00.969 "trtype": "$TEST_TRANSPORT", 00:41:00.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:00.970 "adrfam": "ipv4", 00:41:00.970 "trsvcid": "$NVMF_PORT", 00:41:00.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:00.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:00.970 "hdgst": ${hdgst:-false}, 00:41:00.970 "ddgst": ${ddgst:-false} 00:41:00.970 }, 00:41:00.970 "method": "bdev_nvme_attach_controller" 00:41:00.970 } 00:41:00.970 EOF 00:41:00.970 )") 00:41:00.970 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:41:00.970 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:41:00.970 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:41:00.970 21:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:00.970 "params": { 00:41:00.970 "name": "Nvme1", 00:41:00.970 "trtype": "tcp", 00:41:00.970 "traddr": "10.0.0.2", 00:41:00.970 "adrfam": "ipv4", 00:41:00.970 "trsvcid": "4420", 00:41:00.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:00.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:00.970 "hdgst": false, 00:41:00.970 "ddgst": false 00:41:00.970 }, 00:41:00.970 "method": "bdev_nvme_attach_controller" 00:41:00.970 }' 00:41:00.970 [2024-11-19 21:30:34.627945] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
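For readability, the zcopy target provisioning traced above can be restated as standalone RPC calls. rpc_cmd is the test framework's wrapper around scripts/rpc.py, and the arguments below are exactly those recorded in the trace; the inline comments and the repo-relative paths are added for orientation, not asserted beyond what the log shows.

    # TCP transport with zero-copy enabled (flags as recorded in the trace)
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem cnode1: allow any host, fixed serial, up to 10 namespaces
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB malloc bdev with 4096-byte blocks, exposed as namespace 1
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # bdevperf then connects over NVMe/TCP, fed the JSON printed by gen_nvmf_target_json via a file descriptor
    build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192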
00:41:00.970 [2024-11-19 21:30:34.628094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210580 ] 00:41:01.228 [2024-11-19 21:30:34.777220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:01.228 [2024-11-19 21:30:34.913910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:01.794 Running I/O for 10 seconds... 00:41:04.102 4104.00 IOPS, 32.06 MiB/s [2024-11-19T20:30:38.834Z] 4214.00 IOPS, 32.92 MiB/s [2024-11-19T20:30:39.768Z] 4232.67 IOPS, 33.07 MiB/s [2024-11-19T20:30:40.702Z] 4226.50 IOPS, 33.02 MiB/s [2024-11-19T20:30:41.637Z] 4223.40 IOPS, 33.00 MiB/s [2024-11-19T20:30:42.572Z] 4229.00 IOPS, 33.04 MiB/s [2024-11-19T20:30:43.507Z] 4244.00 IOPS, 33.16 MiB/s [2024-11-19T20:30:44.879Z] 4249.50 IOPS, 33.20 MiB/s [2024-11-19T20:30:45.814Z] 4244.67 IOPS, 33.16 MiB/s [2024-11-19T20:30:45.814Z] 4245.40 IOPS, 33.17 MiB/s 00:41:12.019 Latency(us) 00:41:12.019 [2024-11-19T20:30:45.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:12.019 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:41:12.019 Verification LBA range: start 0x0 length 0x1000 00:41:12.019 Nvme1n1 : 10.02 4246.76 33.18 0.00 0.00 30058.48 567.37 41554.68 00:41:12.019 [2024-11-19T20:30:45.814Z] =================================================================================================================== 00:41:12.019 [2024-11-19T20:30:45.814Z] Total : 4246.76 33.18 0.00 0.00 30058.48 567.37 41554.68 00:41:12.953 21:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3212014 00:41:12.953 21:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:41:12.953 21:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:12.953 21:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:41:12.953 21:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:41:12.953 21:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:41:12.953 21:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:41:12.953 21:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:12.953 21:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:12.953 { 00:41:12.953 "params": { 00:41:12.953 "name": "Nvme$subsystem", 00:41:12.953 "trtype": "$TEST_TRANSPORT", 00:41:12.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:12.953 "adrfam": "ipv4", 00:41:12.953 "trsvcid": "$NVMF_PORT", 00:41:12.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:12.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:12.953 "hdgst": ${hdgst:-false}, 00:41:12.953 "ddgst": ${ddgst:-false} 00:41:12.953 }, 00:41:12.953 "method": "bdev_nvme_attach_controller" 00:41:12.953 } 00:41:12.953 EOF 00:41:12.953 )") 00:41:12.953 21:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:41:12.953 
21:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:41:12.953 21:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:41:12.953 [2024-11-19 21:30:46.430124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.953 21:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:12.953 "params": { 00:41:12.953 "name": "Nvme1", 00:41:12.953 "trtype": "tcp", 00:41:12.953 "traddr": "10.0.0.2", 00:41:12.953 "adrfam": "ipv4", 00:41:12.953 "trsvcid": "4420", 00:41:12.953 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:12.953 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:12.953 "hdgst": false, 00:41:12.953 "ddgst": false 00:41:12.953 }, 00:41:12.953 "method": "bdev_nvme_attach_controller" 00:41:12.953 }' 00:41:12.953 [2024-11-19 21:30:46.430178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.953 [2024-11-19 21:30:46.438026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.953 [2024-11-19 21:30:46.438081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.953 [2024-11-19 21:30:46.445971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.446001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.453988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.454018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.461999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.462030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.469968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.470000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.477976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.478002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.485976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.486003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.493955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.493981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.501978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.502005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.506990] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:41:12.954 [2024-11-19 21:30:46.507123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3212014 ] 00:41:12.954 [2024-11-19 21:30:46.509952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.509978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.517971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.517998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.525992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.526029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.533952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.533978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.541983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.542025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.549969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.549995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.557972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.557999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.565969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.565995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.573955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.573981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.581974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.582001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.590017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.590050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.597987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.598020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.606008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.606040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.614004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.614036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.621984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.622016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.630005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.630037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.638008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.638040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.646007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.646040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.653305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:12.954 [2024-11-19 21:30:46.654022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.654055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.661984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.662016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.670043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.670096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.678054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.678124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.685981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.686013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.694006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.694039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.701980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.702012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.710020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.710064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.718004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.718036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.725979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:12.954 [2024-11-19 21:30:46.726010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.734013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.734046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.954 [2024-11-19 21:30:46.742029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.954 [2024-11-19 21:30:46.742078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.213 [2024-11-19 21:30:46.750005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.213 [2024-11-19 21:30:46.750037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.213 [2024-11-19 21:30:46.758004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.213 [2024-11-19 21:30:46.758036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.213 [2024-11-19 21:30:46.765992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.213 [2024-11-19 21:30:46.766024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.213 [2024-11-19 21:30:46.774007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.213 [2024-11-19 21:30:46.774040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.213 [2024-11-19 21:30:46.782002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.213 [2024-11-19 21:30:46.782035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.213 [2024-11-19 21:30:46.789980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.213 [2024-11-19 21:30:46.790018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.213 [2024-11-19 21:30:46.790989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:13.213 [2024-11-19 21:30:46.798010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.213 [2024-11-19 21:30:46.798043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.213 [2024-11-19 21:30:46.806037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.213 [2024-11-19 21:30:46.806081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.213 [2024-11-19 21:30:46.814030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.213 [2024-11-19 21:30:46.814083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.213 [2024-11-19 21:30:46.822020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.213 [2024-11-19 21:30:46.822053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.213 [2024-11-19 21:30:46.829988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.213 [2024-11-19 21:30:46.830020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.213 [2024-11-19 21:30:46.838004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.213 [2024-11-19 21:30:46.838036] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.213 [2024-11-19 21:30:46.846045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.213 [2024-11-19 21:30:46.846089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.213 [2024-11-19 21:30:46.853985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.213 [2024-11-19 21:30:46.854016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.213 [2024-11-19 21:30:46.862005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.213 [2024-11-19 21:30:46.862038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.213 [2024-11-19 21:30:46.870011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.213 [2024-11-19 21:30:46.870043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.213 [2024-11-19 21:30:46.878005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.213 [2024-11-19 21:30:46.878043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.214 [2024-11-19 21:30:46.886088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.214 [2024-11-19 21:30:46.886147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.214 [2024-11-19 21:30:46.894035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.214 [2024-11-19 21:30:46.894094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.214 [2024-11-19 21:30:46.902085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.214 [2024-11-19 21:30:46.902145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.214 [2024-11-19 21:30:46.910030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.214 [2024-11-19 21:30:46.910087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.214 [2024-11-19 21:30:46.917986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.214 [2024-11-19 21:30:46.918018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.214 [2024-11-19 21:30:46.926006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.214 [2024-11-19 21:30:46.926039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.214 [2024-11-19 21:30:46.934006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.214 [2024-11-19 21:30:46.934050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.214 [2024-11-19 21:30:46.942006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.214 [2024-11-19 21:30:46.942039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.214 [2024-11-19 21:30:46.950030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.214 [2024-11-19 21:30:46.950078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.214 [2024-11-19 21:30:46.957980] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.214 [2024-11-19 21:30:46.958012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.214 [2024-11-19 21:30:46.966012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.214 [2024-11-19 21:30:46.966045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.214 [2024-11-19 21:30:46.974007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.214 [2024-11-19 21:30:46.974038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.214 [2024-11-19 21:30:46.981980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.214 [2024-11-19 21:30:46.982011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.214 [2024-11-19 21:30:46.990003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.214 [2024-11-19 21:30:46.990036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.214 [2024-11-19 21:30:46.998004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.214 [2024-11-19 21:30:46.998036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.214 [2024-11-19 21:30:47.005989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.214 [2024-11-19 21:30:47.006021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.014006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.014038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.021983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.022015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.030015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.030048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.038099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.038158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.046040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.046098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.054057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.054101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.062005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.062037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.069986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.070017] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.078004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.078037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.085987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.086019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.094009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.094042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.102010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.102042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.109987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.110019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.118009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.118041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.126003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.126035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.134004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.134036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.142005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.142037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.149981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.150014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.158028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.158061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.166018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.166056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.174035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.174089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.182019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.182056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.190170] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.190205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.198020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.198067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.206016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.206064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.213990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.214028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.222037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.222084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.230006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.230039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.237985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.238017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.473 [2024-11-19 21:30:47.246019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.473 [2024-11-19 21:30:47.246056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.474 [2024-11-19 21:30:47.254008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.474 [2024-11-19 21:30:47.254042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.474 [2024-11-19 21:30:47.261982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.474 [2024-11-19 21:30:47.262015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.732 [2024-11-19 21:30:47.270063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.732 [2024-11-19 21:30:47.270108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.732 [2024-11-19 21:30:47.277993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.732 [2024-11-19 21:30:47.278026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.732 [2024-11-19 21:30:47.286013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.732 [2024-11-19 21:30:47.286049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.732 [2024-11-19 21:30:47.294019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.732 [2024-11-19 21:30:47.294066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.732 [2024-11-19 21:30:47.301985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.732 [2024-11-19 21:30:47.302018] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.732 [2024-11-19 21:30:47.310005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.732 [2024-11-19 21:30:47.310049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.732 [2024-11-19 21:30:47.318034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.732 [2024-11-19 21:30:47.318098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.732 [2024-11-19 21:30:47.325985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.732 [2024-11-19 21:30:47.326018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.732 [2024-11-19 21:30:47.334009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.732 [2024-11-19 21:30:47.334045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.732 [2024-11-19 21:30:47.391308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.732 [2024-11-19 21:30:47.391350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.732 [2024-11-19 21:30:47.397997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.732 [2024-11-19 21:30:47.398028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.732 [2024-11-19 21:30:47.406014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.732 [2024-11-19 21:30:47.406051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.732 Running I/O for 5 seconds... 
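[editor's note] Before the per-second samples of this 5-second randrw run start accumulating below, a quick back-of-the-envelope check on the verify-run summary further above (queue depth 128, 8 KiB I/O, 4246.76 IOPS, 33.18 MiB/s, ~30.06 ms average latency). This arithmetic is mine, not something the harness prints:

\[
\text{MiB/s} = \frac{\text{IOPS} \times 8192\,\text{B}}{2^{20}\,\text{B/MiB}}
\;\Rightarrow\; \frac{4246.76 \times 8192}{1048576} \approx 33.18\ \text{MiB/s}
\]
\[
\text{outstanding I/O} = \text{IOPS} \times \bar{t}_{\text{lat}}
\;\Rightarrow\; 4246.76 \times 0.030058\,\text{s} \approx 127.7 \approx q = 128
\]

The first relation also holds for the in-flight samples that follow, e.g. 8303.00 IOPS at 8 KiB per I/O works out to about 64.9 MiB/s, matching the reported 64.87 MiB/s.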
00:41:13.732 [2024-11-19 21:30:47.426035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.732 [2024-11-19 21:30:47.426086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.732 [2024-11-19 21:30:47.442066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.732 [2024-11-19 21:30:47.442130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.732 [2024-11-19 21:30:47.458736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.732 [2024-11-19 21:30:47.458777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.732 [2024-11-19 21:30:47.474263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.732 [2024-11-19 21:30:47.474305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.732 [2024-11-19 21:30:47.489990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.732 [2024-11-19 21:30:47.490030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.732 [2024-11-19 21:30:47.506218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.733 [2024-11-19 21:30:47.506253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.733 [2024-11-19 21:30:47.521734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.733 [2024-11-19 21:30:47.521774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.991 [2024-11-19 21:30:47.537089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.991 [2024-11-19 21:30:47.537144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.991 [2024-11-19 21:30:47.552900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.991 [2024-11-19 21:30:47.552940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.991 [2024-11-19 21:30:47.568139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.991 [2024-11-19 21:30:47.568173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.991 [2024-11-19 21:30:47.582731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.991 [2024-11-19 21:30:47.582771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.991 [2024-11-19 21:30:47.597728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.991 [2024-11-19 21:30:47.597768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.991 [2024-11-19 21:30:47.613876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.991 [2024-11-19 21:30:47.613917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.991 [2024-11-19 21:30:47.628725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.991 [2024-11-19 21:30:47.628766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.991 [2024-11-19 21:30:47.644598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.991 
[2024-11-19 21:30:47.644638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.991 [2024-11-19 21:30:47.659537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.991 [2024-11-19 21:30:47.659578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.991 [2024-11-19 21:30:47.674004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.991 [2024-11-19 21:30:47.674044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.991 [2024-11-19 21:30:47.688913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.991 [2024-11-19 21:30:47.688954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.991 [2024-11-19 21:30:47.703953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.991 [2024-11-19 21:30:47.703994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.991 [2024-11-19 21:30:47.718537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.991 [2024-11-19 21:30:47.718576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.991 [2024-11-19 21:30:47.736001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.991 [2024-11-19 21:30:47.736041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.991 [2024-11-19 21:30:47.749096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.991 [2024-11-19 21:30:47.749136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.991 [2024-11-19 21:30:47.765512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.991 [2024-11-19 21:30:47.765565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.991 [2024-11-19 21:30:47.780327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.991 [2024-11-19 21:30:47.780381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.250 [2024-11-19 21:30:47.795188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.250 [2024-11-19 21:30:47.795223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.250 [2024-11-19 21:30:47.810582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.250 [2024-11-19 21:30:47.810621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.250 [2024-11-19 21:30:47.825581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.250 [2024-11-19 21:30:47.825621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.250 [2024-11-19 21:30:47.839839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.250 [2024-11-19 21:30:47.839879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.250 [2024-11-19 21:30:47.854728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.250 [2024-11-19 21:30:47.854768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.250 [2024-11-19 21:30:47.869395] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.250 [2024-11-19 21:30:47.869436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.250 [2024-11-19 21:30:47.884016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.250 [2024-11-19 21:30:47.884056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.250 [2024-11-19 21:30:47.899687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.250 [2024-11-19 21:30:47.899727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.250 [2024-11-19 21:30:47.914682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.250 [2024-11-19 21:30:47.914722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.250 [2024-11-19 21:30:47.930201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.250 [2024-11-19 21:30:47.930234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.250 [2024-11-19 21:30:47.944755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.250 [2024-11-19 21:30:47.944789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.250 [2024-11-19 21:30:47.959369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.250 [2024-11-19 21:30:47.959402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.250 [2024-11-19 21:30:47.974149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.251 [2024-11-19 21:30:47.974184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.251 [2024-11-19 21:30:47.988839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.251 [2024-11-19 21:30:47.988873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.251 [2024-11-19 21:30:48.003534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.251 [2024-11-19 21:30:48.003573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.251 [2024-11-19 21:30:48.018815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.251 [2024-11-19 21:30:48.018849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.251 [2024-11-19 21:30:48.039055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.251 [2024-11-19 21:30:48.039117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.509 [2024-11-19 21:30:48.052370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.509 [2024-11-19 21:30:48.052433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.510 [2024-11-19 21:30:48.069120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.510 [2024-11-19 21:30:48.069156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.510 [2024-11-19 21:30:48.084437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.510 [2024-11-19 21:30:48.084476] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.510 [2024-11-19 21:30:48.099774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.510 [2024-11-19 21:30:48.099813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.510 [2024-11-19 21:30:48.113993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.510 [2024-11-19 21:30:48.114033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.510 [2024-11-19 21:30:48.128582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.510 [2024-11-19 21:30:48.128623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.510 [2024-11-19 21:30:48.143204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.510 [2024-11-19 21:30:48.143237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.510 [2024-11-19 21:30:48.158840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.510 [2024-11-19 21:30:48.158881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.510 [2024-11-19 21:30:48.174200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.510 [2024-11-19 21:30:48.174234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.510 [2024-11-19 21:30:48.189930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.510 [2024-11-19 21:30:48.189971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.510 [2024-11-19 21:30:48.205787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.510 [2024-11-19 21:30:48.205830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.510 [2024-11-19 21:30:48.220589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.510 [2024-11-19 21:30:48.220629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.510 [2024-11-19 21:30:48.235082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.510 [2024-11-19 21:30:48.235132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.510 [2024-11-19 21:30:48.251050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.510 [2024-11-19 21:30:48.251098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.510 [2024-11-19 21:30:48.267213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.510 [2024-11-19 21:30:48.267246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.510 [2024-11-19 21:30:48.281792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.510 [2024-11-19 21:30:48.281833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.510 [2024-11-19 21:30:48.296232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.510 [2024-11-19 21:30:48.296265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.768 [2024-11-19 21:30:48.312403] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.768 [2024-11-19 21:30:48.312442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.768 [2024-11-19 21:30:48.327596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.768 [2024-11-19 21:30:48.327636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.768 [2024-11-19 21:30:48.342361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.768 [2024-11-19 21:30:48.342401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.768 [2024-11-19 21:30:48.357534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.768 [2024-11-19 21:30:48.357573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.768 [2024-11-19 21:30:48.373286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.768 [2024-11-19 21:30:48.373319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.768 [2024-11-19 21:30:48.388724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.768 [2024-11-19 21:30:48.388763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.768 [2024-11-19 21:30:48.404362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.768 [2024-11-19 21:30:48.404401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.768 8303.00 IOPS, 64.87 MiB/s [2024-11-19T20:30:48.563Z] [2024-11-19 21:30:48.419990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.768 [2024-11-19 21:30:48.420029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.768 [2024-11-19 21:30:48.435897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.769 [2024-11-19 21:30:48.435936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.769 [2024-11-19 21:30:48.451541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.769 [2024-11-19 21:30:48.451580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.769 [2024-11-19 21:30:48.467543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.769 [2024-11-19 21:30:48.467583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.769 [2024-11-19 21:30:48.482789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.769 [2024-11-19 21:30:48.482829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.769 [2024-11-19 21:30:48.498423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.769 [2024-11-19 21:30:48.498462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.769 [2024-11-19 21:30:48.513937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.769 [2024-11-19 21:30:48.513971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.769 [2024-11-19 21:30:48.529041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:14.769 [2024-11-19 21:30:48.529106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.769 [2024-11-19 21:30:48.544684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.769 [2024-11-19 21:30:48.544726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.769 [2024-11-19 21:30:48.559925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.769 [2024-11-19 21:30:48.559961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.027 [2024-11-19 21:30:48.574322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.027 [2024-11-19 21:30:48.574357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.027 [2024-11-19 21:30:48.589109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.027 [2024-11-19 21:30:48.589145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.027 [2024-11-19 21:30:48.604062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.027 [2024-11-19 21:30:48.604105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.027 [2024-11-19 21:30:48.617682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.027 [2024-11-19 21:30:48.617715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.027 [2024-11-19 21:30:48.632495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.027 [2024-11-19 21:30:48.632529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.027 [2024-11-19 21:30:48.647211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.027 [2024-11-19 21:30:48.647246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.027 [2024-11-19 21:30:48.661451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.027 [2024-11-19 21:30:48.661483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.027 [2024-11-19 21:30:48.676142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.027 [2024-11-19 21:30:48.676178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.027 [2024-11-19 21:30:48.691199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.027 [2024-11-19 21:30:48.691234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.027 [2024-11-19 21:30:48.704971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.027 [2024-11-19 21:30:48.705003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.027 [2024-11-19 21:30:48.720463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.027 [2024-11-19 21:30:48.720496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.027 [2024-11-19 21:30:48.734856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.027 [2024-11-19 21:30:48.734888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.027 [2024-11-19 21:30:48.752958] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.027 [2024-11-19 21:30:48.752991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.027 [2024-11-19 21:30:48.765235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.027 [2024-11-19 21:30:48.765269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.027 [2024-11-19 21:30:48.780771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.027 [2024-11-19 21:30:48.780805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.027 [2024-11-19 21:30:48.794312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.027 [2024-11-19 21:30:48.794347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.027 [2024-11-19 21:30:48.809548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.027 [2024-11-19 21:30:48.809582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.286 [2024-11-19 21:30:48.823944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.286 [2024-11-19 21:30:48.823978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.286 [2024-11-19 21:30:48.838079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.286 [2024-11-19 21:30:48.838112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.286 [2024-11-19 21:30:48.852331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.286 [2024-11-19 21:30:48.852365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.286 [2024-11-19 21:30:48.866755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.286 [2024-11-19 21:30:48.866787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.286 [2024-11-19 21:30:48.881365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.286 [2024-11-19 21:30:48.881413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.286 [2024-11-19 21:30:48.895600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.286 [2024-11-19 21:30:48.895634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.286 [2024-11-19 21:30:48.909903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.286 [2024-11-19 21:30:48.909937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.286 [2024-11-19 21:30:48.924252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.286 [2024-11-19 21:30:48.924286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.286 [2024-11-19 21:30:48.938403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.286 [2024-11-19 21:30:48.938435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.286 [2024-11-19 21:30:48.952680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.286 [2024-11-19 21:30:48.952713] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.286 [2024-11-19 21:30:48.966528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.286 [2024-11-19 21:30:48.966561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.286 [2024-11-19 21:30:48.983049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.286 [2024-11-19 21:30:48.983109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.286 [2024-11-19 21:30:48.995445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.286 [2024-11-19 21:30:48.995477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.286 [2024-11-19 21:30:49.011192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.286 [2024-11-19 21:30:49.011226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.286 [2024-11-19 21:30:49.025685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.286 [2024-11-19 21:30:49.025717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.286 [2024-11-19 21:30:49.039798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.286 [2024-11-19 21:30:49.039831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.286 [2024-11-19 21:30:49.053560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.286 [2024-11-19 21:30:49.053592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.286 [2024-11-19 21:30:49.067370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.286 [2024-11-19 21:30:49.067418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.545 [2024-11-19 21:30:49.081850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.545 [2024-11-19 21:30:49.081884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.545 [2024-11-19 21:30:49.096681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.545 [2024-11-19 21:30:49.096714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.545 [2024-11-19 21:30:49.111662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.545 [2024-11-19 21:30:49.111695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.545 [2024-11-19 21:30:49.126363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.545 [2024-11-19 21:30:49.126396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.545 [2024-11-19 21:30:49.141173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.545 [2024-11-19 21:30:49.141207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.545 [2024-11-19 21:30:49.155832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.545 [2024-11-19 21:30:49.155865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.545 [2024-11-19 21:30:49.170177] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.545 [2024-11-19 21:30:49.170225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.545 [2024-11-19 21:30:49.184247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.545 [2024-11-19 21:30:49.184282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.545 [2024-11-19 21:30:49.198158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.545 [2024-11-19 21:30:49.198192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.545 [2024-11-19 21:30:49.212746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.545 [2024-11-19 21:30:49.212779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.545 [2024-11-19 21:30:49.227490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.545 [2024-11-19 21:30:49.227523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.545 [2024-11-19 21:30:49.242158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.545 [2024-11-19 21:30:49.242192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.545 [2024-11-19 21:30:49.256317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.545 [2024-11-19 21:30:49.256368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.545 [2024-11-19 21:30:49.269844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.545 [2024-11-19 21:30:49.269876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.545 [2024-11-19 21:30:49.283061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.545 [2024-11-19 21:30:49.283120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.545 [2024-11-19 21:30:49.302820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.545 [2024-11-19 21:30:49.302856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.545 [2024-11-19 21:30:49.315179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.545 [2024-11-19 21:30:49.315214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.545 [2024-11-19 21:30:49.330229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.545 [2024-11-19 21:30:49.330265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.803 [2024-11-19 21:30:49.344951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.803 [2024-11-19 21:30:49.344985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.804 [2024-11-19 21:30:49.359865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.804 [2024-11-19 21:30:49.359898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.804 [2024-11-19 21:30:49.374291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.804 [2024-11-19 21:30:49.374327] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.804 [2024-11-19 21:30:49.388769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.804 [2024-11-19 21:30:49.388802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.804 [2024-11-19 21:30:49.402993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.804 [2024-11-19 21:30:49.403028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.804 8550.00 IOPS, 66.80 MiB/s [2024-11-19T20:30:49.599Z] [2024-11-19 21:30:49.417062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.804 [2024-11-19 21:30:49.417103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.804 [2024-11-19 21:30:49.432318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.804 [2024-11-19 21:30:49.432367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.804 [2024-11-19 21:30:49.446519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.804 [2024-11-19 21:30:49.446564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.804 [2024-11-19 21:30:49.460157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.804 [2024-11-19 21:30:49.460191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.804 [2024-11-19 21:30:49.475185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.804 [2024-11-19 21:30:49.475220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.804 [2024-11-19 21:30:49.488825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.804 [2024-11-19 21:30:49.488857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.804 [2024-11-19 21:30:49.502604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.804 [2024-11-19 21:30:49.502637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.804 [2024-11-19 21:30:49.518429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.804 [2024-11-19 21:30:49.518463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.804 [2024-11-19 21:30:49.530486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.804 [2024-11-19 21:30:49.530518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.804 [2024-11-19 21:30:49.545749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.804 [2024-11-19 21:30:49.545784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.804 [2024-11-19 21:30:49.559473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.804 [2024-11-19 21:30:49.559506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.804 [2024-11-19 21:30:49.572959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.804 [2024-11-19 21:30:49.572994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.804 [2024-11-19 
21:30:49.587613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.804 [2024-11-19 21:30:49.587654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.062 [2024-11-19 21:30:49.602297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.062 [2024-11-19 21:30:49.602331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.062 [2024-11-19 21:30:49.618171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.062 [2024-11-19 21:30:49.618205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.062 [2024-11-19 21:30:49.633727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.062 [2024-11-19 21:30:49.633766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.062 [2024-11-19 21:30:49.649389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.062 [2024-11-19 21:30:49.649429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.062 [2024-11-19 21:30:49.664996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.062 [2024-11-19 21:30:49.665030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.062 [2024-11-19 21:30:49.679337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.062 [2024-11-19 21:30:49.679386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.062 [2024-11-19 21:30:49.693567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.062 [2024-11-19 21:30:49.693600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.062 [2024-11-19 21:30:49.709143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.062 [2024-11-19 21:30:49.709176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.062 [2024-11-19 21:30:49.724625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.062 [2024-11-19 21:30:49.724677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.062 [2024-11-19 21:30:49.739437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.062 [2024-11-19 21:30:49.739469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.062 [2024-11-19 21:30:49.754453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.062 [2024-11-19 21:30:49.754492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.062 [2024-11-19 21:30:49.770251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.062 [2024-11-19 21:30:49.770284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.062 [2024-11-19 21:30:49.785913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.062 [2024-11-19 21:30:49.785952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.062 [2024-11-19 21:30:49.802616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.062 [2024-11-19 21:30:49.802655] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.062 [2024-11-19 21:30:49.817985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.062 [2024-11-19 21:30:49.818024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.063 [2024-11-19 21:30:49.833961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.063 [2024-11-19 21:30:49.834001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.063 [2024-11-19 21:30:49.849250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.063 [2024-11-19 21:30:49.849284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.321 [2024-11-19 21:30:49.864725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.321 [2024-11-19 21:30:49.864764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.321 [2024-11-19 21:30:49.879835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.321 [2024-11-19 21:30:49.879874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.321 [2024-11-19 21:30:49.895491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.321 [2024-11-19 21:30:49.895530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.321 [2024-11-19 21:30:49.910847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.321 [2024-11-19 21:30:49.910885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.321 [2024-11-19 21:30:49.926616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.321 [2024-11-19 21:30:49.926654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.321 [2024-11-19 21:30:49.941279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.321 [2024-11-19 21:30:49.941311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.321 [2024-11-19 21:30:49.956880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.321 [2024-11-19 21:30:49.956919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.321 [2024-11-19 21:30:49.971757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.321 [2024-11-19 21:30:49.971795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.321 [2024-11-19 21:30:49.987153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.321 [2024-11-19 21:30:49.987186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.321 [2024-11-19 21:30:50.002208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.321 [2024-11-19 21:30:50.002242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.321 [2024-11-19 21:30:50.022429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.321 [2024-11-19 21:30:50.022484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.322 [2024-11-19 21:30:50.041946] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.322 [2024-11-19 21:30:50.041990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.322 [2024-11-19 21:30:50.058183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.322 [2024-11-19 21:30:50.058218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.322 [2024-11-19 21:30:50.074417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.322 [2024-11-19 21:30:50.074457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.322 [2024-11-19 21:30:50.090208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.322 [2024-11-19 21:30:50.090241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.322 [2024-11-19 21:30:50.105894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.322 [2024-11-19 21:30:50.105933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.580 [2024-11-19 21:30:50.121681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.580 [2024-11-19 21:30:50.121721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.580 [2024-11-19 21:30:50.136939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.580 [2024-11-19 21:30:50.136978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.580 [2024-11-19 21:30:50.152661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.580 [2024-11-19 21:30:50.152700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.580 [2024-11-19 21:30:50.168125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.580 [2024-11-19 21:30:50.168161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.580 [2024-11-19 21:30:50.184633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.580 [2024-11-19 21:30:50.184672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.580 [2024-11-19 21:30:50.200452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.580 [2024-11-19 21:30:50.200486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.580 [2024-11-19 21:30:50.215807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.580 [2024-11-19 21:30:50.215846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.580 [2024-11-19 21:30:50.231900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.580 [2024-11-19 21:30:50.231938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.580 [2024-11-19 21:30:50.247276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.580 [2024-11-19 21:30:50.247314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.580 [2024-11-19 21:30:50.262415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.580 [2024-11-19 21:30:50.262455] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.580 [2024-11-19 21:30:50.278827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.580 [2024-11-19 21:30:50.278866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.580 [2024-11-19 21:30:50.294559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.580 [2024-11-19 21:30:50.294598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.580 [2024-11-19 21:30:50.309876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.580 [2024-11-19 21:30:50.309914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.580 [2024-11-19 21:30:50.325136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.580 [2024-11-19 21:30:50.325187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.580 [2024-11-19 21:30:50.340326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.580 [2024-11-19 21:30:50.340365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.580 [2024-11-19 21:30:50.356184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.580 [2024-11-19 21:30:50.356217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.580 [2024-11-19 21:30:50.371220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.580 [2024-11-19 21:30:50.371255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.837 [2024-11-19 21:30:50.387587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.837 [2024-11-19 21:30:50.387626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.837 [2024-11-19 21:30:50.403046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.837 [2024-11-19 21:30:50.403094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.837 8462.67 IOPS, 66.11 MiB/s [2024-11-19T20:30:50.632Z] [2024-11-19 21:30:50.418241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.837 [2024-11-19 21:30:50.418274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.837 [2024-11-19 21:30:50.433780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.838 [2024-11-19 21:30:50.433820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.838 [2024-11-19 21:30:50.448884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.838 [2024-11-19 21:30:50.448922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.838 [2024-11-19 21:30:50.463724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.838 [2024-11-19 21:30:50.463756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.838 [2024-11-19 21:30:50.478244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.838 [2024-11-19 21:30:50.478279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.838 [2024-11-19 
21:30:50.493397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.838 [2024-11-19 21:30:50.493435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.838 [2024-11-19 21:30:50.508482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.838 [2024-11-19 21:30:50.508514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.838 [2024-11-19 21:30:50.523800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.838 [2024-11-19 21:30:50.523832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.838 [2024-11-19 21:30:50.538770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.838 [2024-11-19 21:30:50.538808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.838 [2024-11-19 21:30:50.553975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.838 [2024-11-19 21:30:50.554014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.838 [2024-11-19 21:30:50.568446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.838 [2024-11-19 21:30:50.568486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.838 [2024-11-19 21:30:50.583697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.838 [2024-11-19 21:30:50.583736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.838 [2024-11-19 21:30:50.598141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.838 [2024-11-19 21:30:50.598185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.838 [2024-11-19 21:30:50.613873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.838 [2024-11-19 21:30:50.613913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.838 [2024-11-19 21:30:50.629214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:16.838 [2024-11-19 21:30:50.629246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.096 [2024-11-19 21:30:50.644176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.096 [2024-11-19 21:30:50.644209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.096 [2024-11-19 21:30:50.659172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.096 [2024-11-19 21:30:50.659204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.096 [2024-11-19 21:30:50.674184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.096 [2024-11-19 21:30:50.674222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.096 [2024-11-19 21:30:50.689611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.096 [2024-11-19 21:30:50.689644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.096 [2024-11-19 21:30:50.705136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.096 [2024-11-19 21:30:50.705176] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.096 [2024-11-19 21:30:50.720545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.096 [2024-11-19 21:30:50.720585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.096 [2024-11-19 21:30:50.735915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.096 [2024-11-19 21:30:50.735948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.096 [2024-11-19 21:30:50.750884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.096 [2024-11-19 21:30:50.750923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.096 [2024-11-19 21:30:50.765603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.096 [2024-11-19 21:30:50.765643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.096 [2024-11-19 21:30:50.781638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.096 [2024-11-19 21:30:50.781678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.096 [2024-11-19 21:30:50.797143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.096 [2024-11-19 21:30:50.797196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.096 [2024-11-19 21:30:50.812999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.096 [2024-11-19 21:30:50.813038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.096 [2024-11-19 21:30:50.828845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.096 [2024-11-19 21:30:50.828884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.096 [2024-11-19 21:30:50.843408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.096 [2024-11-19 21:30:50.843447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.096 [2024-11-19 21:30:50.858580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.096 [2024-11-19 21:30:50.858620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.096 [2024-11-19 21:30:50.873879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.096 [2024-11-19 21:30:50.873917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.354 [2024-11-19 21:30:50.889893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.354 [2024-11-19 21:30:50.889943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.354 [2024-11-19 21:30:50.905800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.354 [2024-11-19 21:30:50.905841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.354 [2024-11-19 21:30:50.921255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.354 [2024-11-19 21:30:50.921289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.354 [2024-11-19 21:30:50.935466] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.354 [2024-11-19 21:30:50.935505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.354 [2024-11-19 21:30:50.951410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.354 [2024-11-19 21:30:50.951449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.354 [2024-11-19 21:30:50.966450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.354 [2024-11-19 21:30:50.966497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.354 [2024-11-19 21:30:50.981661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.354 [2024-11-19 21:30:50.981700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.354 [2024-11-19 21:30:50.997239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.354 [2024-11-19 21:30:50.997274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.354 [2024-11-19 21:30:51.012163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.354 [2024-11-19 21:30:51.012196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.354 [2024-11-19 21:30:51.027902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.354 [2024-11-19 21:30:51.027940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.354 [2024-11-19 21:30:51.043497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.354 [2024-11-19 21:30:51.043536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.354 [2024-11-19 21:30:51.058998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.354 [2024-11-19 21:30:51.059037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.354 [2024-11-19 21:30:51.074285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.354 [2024-11-19 21:30:51.074318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.354 [2024-11-19 21:30:51.089415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.354 [2024-11-19 21:30:51.089454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.355 [2024-11-19 21:30:51.104530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.355 [2024-11-19 21:30:51.104568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.355 [2024-11-19 21:30:51.119903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.355 [2024-11-19 21:30:51.119942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.355 [2024-11-19 21:30:51.134648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.355 [2024-11-19 21:30:51.134689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.613 [2024-11-19 21:30:51.149257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.613 [2024-11-19 21:30:51.149293] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.613 [2024-11-19 21:30:51.164903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.613 [2024-11-19 21:30:51.164942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.613 [2024-11-19 21:30:51.179265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.613 [2024-11-19 21:30:51.179312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.613 [2024-11-19 21:30:51.195960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.613 [2024-11-19 21:30:51.195995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.613 [2024-11-19 21:30:51.210281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.613 [2024-11-19 21:30:51.210315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.613 [2024-11-19 21:30:51.224632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.613 [2024-11-19 21:30:51.224665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.613 [2024-11-19 21:30:51.238585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.613 [2024-11-19 21:30:51.238617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.613 [2024-11-19 21:30:51.258310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.613 [2024-11-19 21:30:51.258361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.613 [2024-11-19 21:30:51.271966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.613 [2024-11-19 21:30:51.272008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.613 [2024-11-19 21:30:51.288615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.613 [2024-11-19 21:30:51.288654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.613 [2024-11-19 21:30:51.303257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.613 [2024-11-19 21:30:51.303291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.613 [2024-11-19 21:30:51.318856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.613 [2024-11-19 21:30:51.318895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.613 [2024-11-19 21:30:51.333979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.613 [2024-11-19 21:30:51.334018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.613 [2024-11-19 21:30:51.348777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.613 [2024-11-19 21:30:51.348815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.613 [2024-11-19 21:30:51.363767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.613 [2024-11-19 21:30:51.363806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.613 [2024-11-19 21:30:51.379153] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.613 [2024-11-19 21:30:51.379186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.613 [2024-11-19 21:30:51.394696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.613 [2024-11-19 21:30:51.394729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.871 [2024-11-19 21:30:51.410296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.871 [2024-11-19 21:30:51.410330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.871 8440.75 IOPS, 65.94 MiB/s [2024-11-19T20:30:51.666Z] [2024-11-19 21:30:51.424871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.871 [2024-11-19 21:30:51.424910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.871 [2024-11-19 21:30:51.439425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.871 [2024-11-19 21:30:51.439465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.871 [2024-11-19 21:30:51.454750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.871 [2024-11-19 21:30:51.454788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.871 [2024-11-19 21:30:51.469709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.871 [2024-11-19 21:30:51.469747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.871 [2024-11-19 21:30:51.484034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.871 [2024-11-19 21:30:51.484081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.871 [2024-11-19 21:30:51.499193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.871 [2024-11-19 21:30:51.499225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.871 [2024-11-19 21:30:51.514574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.871 [2024-11-19 21:30:51.514613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.871 [2024-11-19 21:30:51.529507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.871 [2024-11-19 21:30:51.529545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.871 [2024-11-19 21:30:51.544534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.871 [2024-11-19 21:30:51.544573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.871 [2024-11-19 21:30:51.559509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.871 [2024-11-19 21:30:51.559548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.871 [2024-11-19 21:30:51.575492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.871 [2024-11-19 21:30:51.575531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.871 [2024-11-19 21:30:51.590691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:17.871 [2024-11-19 21:30:51.590730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.871 [2024-11-19 21:30:51.606141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.871 [2024-11-19 21:30:51.606175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.871 [2024-11-19 21:30:51.620710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.871 [2024-11-19 21:30:51.620743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.871 [2024-11-19 21:30:51.635523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.871 [2024-11-19 21:30:51.635562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:17.871 [2024-11-19 21:30:51.650677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:17.871 [2024-11-19 21:30:51.650716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.129 [2024-11-19 21:30:51.665574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.129 [2024-11-19 21:30:51.665607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.129 [2024-11-19 21:30:51.680674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.129 [2024-11-19 21:30:51.680706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.129 [2024-11-19 21:30:51.696282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.129 [2024-11-19 21:30:51.696315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.129 [2024-11-19 21:30:51.711665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.129 [2024-11-19 21:30:51.711704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.129 [2024-11-19 21:30:51.727042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.129 [2024-11-19 21:30:51.727090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.129 [2024-11-19 21:30:51.742760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.129 [2024-11-19 21:30:51.742800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.129 [2024-11-19 21:30:51.757861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.129 [2024-11-19 21:30:51.757899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.129 [2024-11-19 21:30:51.773039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.129 [2024-11-19 21:30:51.773087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.129 [2024-11-19 21:30:51.788289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.129 [2024-11-19 21:30:51.788322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.129 [2024-11-19 21:30:51.803120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.129 [2024-11-19 21:30:51.803153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.129 [2024-11-19 21:30:51.817695] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.129 [2024-11-19 21:30:51.817733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.129 [2024-11-19 21:30:51.832151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.129 [2024-11-19 21:30:51.832184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.129 [2024-11-19 21:30:51.847670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.129 [2024-11-19 21:30:51.847710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.129 [2024-11-19 21:30:51.862426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.129 [2024-11-19 21:30:51.862458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.129 [2024-11-19 21:30:51.877407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.129 [2024-11-19 21:30:51.877461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.129 [2024-11-19 21:30:51.892529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.129 [2024-11-19 21:30:51.892567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.129 [2024-11-19 21:30:51.906259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.129 [2024-11-19 21:30:51.906292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.129 [2024-11-19 21:30:51.920909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.129 [2024-11-19 21:30:51.920942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.388 [2024-11-19 21:30:51.935431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.388 [2024-11-19 21:30:51.935470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.388 [2024-11-19 21:30:51.950133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.388 [2024-11-19 21:30:51.950173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.388 [2024-11-19 21:30:51.966192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.388 [2024-11-19 21:30:51.966226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.388 [2024-11-19 21:30:51.981994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.388 [2024-11-19 21:30:51.982034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.388 [2024-11-19 21:30:51.996677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.388 [2024-11-19 21:30:51.996716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.388 [2024-11-19 21:30:52.011428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.388 [2024-11-19 21:30:52.011467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.388 [2024-11-19 21:30:52.026803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.388 [2024-11-19 21:30:52.026842] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.388 [2024-11-19 21:30:52.040881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.388 [2024-11-19 21:30:52.040921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.388 [2024-11-19 21:30:52.055794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.388 [2024-11-19 21:30:52.055826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.388 [2024-11-19 21:30:52.070397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.388 [2024-11-19 21:30:52.070436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.388 [2024-11-19 21:30:52.083893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.388 [2024-11-19 21:30:52.083932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.388 [2024-11-19 21:30:52.100034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.388 [2024-11-19 21:30:52.100090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.388 [2024-11-19 21:30:52.115117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.388 [2024-11-19 21:30:52.115166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.388 [2024-11-19 21:30:52.131206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.388 [2024-11-19 21:30:52.131240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.388 [2024-11-19 21:30:52.145812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.388 [2024-11-19 21:30:52.145853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.388 [2024-11-19 21:30:52.161040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.388 [2024-11-19 21:30:52.161092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.388 [2024-11-19 21:30:52.175851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.388 [2024-11-19 21:30:52.175883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.647 [2024-11-19 21:30:52.189976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.647 [2024-11-19 21:30:52.190014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.647 [2024-11-19 21:30:52.204761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.647 [2024-11-19 21:30:52.204800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.647 [2024-11-19 21:30:52.218581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.647 [2024-11-19 21:30:52.218620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.647 [2024-11-19 21:30:52.232165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.647 [2024-11-19 21:30:52.232199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.647 [2024-11-19 21:30:52.248167] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.647 [2024-11-19 21:30:52.248198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.647 [2024-11-19 21:30:52.263362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.647 [2024-11-19 21:30:52.263394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.647 [2024-11-19 21:30:52.277827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.647 [2024-11-19 21:30:52.277869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.647 [2024-11-19 21:30:52.292392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.647 [2024-11-19 21:30:52.292439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.647 [2024-11-19 21:30:52.307355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.647 [2024-11-19 21:30:52.307387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.647 [2024-11-19 21:30:52.322154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.647 [2024-11-19 21:30:52.322187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.647 [2024-11-19 21:30:52.337743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.647 [2024-11-19 21:30:52.337782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.647 [2024-11-19 21:30:52.352896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.647 [2024-11-19 21:30:52.352934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.647 [2024-11-19 21:30:52.368267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.647 [2024-11-19 21:30:52.368300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.647 [2024-11-19 21:30:52.383452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.647 [2024-11-19 21:30:52.383484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.647 [2024-11-19 21:30:52.398143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.647 [2024-11-19 21:30:52.398176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.647 [2024-11-19 21:30:52.412434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.647 [2024-11-19 21:30:52.412474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.647 8458.20 IOPS, 66.08 MiB/s [2024-11-19T20:30:52.442Z] [2024-11-19 21:30:52.425904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.647 [2024-11-19 21:30:52.425943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.647 [2024-11-19 21:30:52.433731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.647 [2024-11-19 21:30:52.433772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.647 00:41:18.647 Latency(us) 00:41:18.647 [2024-11-19T20:30:52.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:41:18.647 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:41:18.647 Nvme1n1 : 5.02 8458.16 66.08 0.00 0.00 15107.34 4150.61 30874.74 00:41:18.647 [2024-11-19T20:30:52.442Z] =================================================================================================================== 00:41:18.647 [2024-11-19T20:30:52.442Z] Total : 8458.16 66.08 0.00 0.00 15107.34 4150.61 30874.74 00:41:18.647 [2024-11-19 21:30:52.437994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.647 [2024-11-19 21:30:52.438028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.906 [2024-11-19 21:30:52.446013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.906 [2024-11-19 21:30:52.446049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.906 [2024-11-19 21:30:52.454013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.906 [2024-11-19 21:30:52.454049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.906 [2024-11-19 21:30:52.462003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.906 [2024-11-19 21:30:52.462037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.906 [2024-11-19 21:30:52.470009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.906 [2024-11-19 21:30:52.470043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.906 [2024-11-19 21:30:52.478008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.906 [2024-11-19 21:30:52.478041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.906 [2024-11-19 21:30:52.486038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.906 [2024-11-19 21:30:52.486103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.906 [2024-11-19 21:30:52.494161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.906 [2024-11-19 21:30:52.494216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.906 [2024-11-19 21:30:52.501992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.906 [2024-11-19 21:30:52.502026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.906 [2024-11-19 21:30:52.510046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.906 [2024-11-19 21:30:52.510087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.906 [2024-11-19 21:30:52.518004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.906 [2024-11-19 21:30:52.518036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.906 [2024-11-19 21:30:52.525984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.906 [2024-11-19 21:30:52.526015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.906 [2024-11-19 21:30:52.534018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:18.906 [2024-11-19 21:30:52.534049] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:18.906 [repeated error pair elided: subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace, logged once per add-namespace attempt between 2024-11-19 21:30:52.542 and 21:30:53.354] [2024-11-19 21:30:53.354005] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:19.684 [2024-11-19 21:30:53.354037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:19.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3212014) - No such process 00:41:19.684 21:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3212014 00:41:19.684 21:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:19.684 21:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.684 21:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:19.684 21:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.684 21:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:19.684 21:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.684 21:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:19.684 delay0 00:41:19.684 21:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.684 21:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:41:19.684 21:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.684 21:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:19.684 21:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.684 21:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:41:19.942 [2024-11-19 21:30:53.579248] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:41:28.054 Initializing NVMe Controllers 00:41:28.054 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:28.054 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:28.054 Initialization complete. Launching workers. 
00:41:28.054 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 220, failed: 17238 00:41:28.054 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 17310, failed to submit 148 00:41:28.054 success 17252, unsuccessful 58, failed 0 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:28.054 rmmod nvme_tcp 00:41:28.054 rmmod nvme_fabrics 00:41:28.054 rmmod nvme_keyring 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3210425 ']' 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3210425 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3210425 ']' 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3210425 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3210425 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3210425' 00:41:28.054 killing process with pid 3210425 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3210425 00:41:28.054 21:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3210425 00:41:28.313 21:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:28.313 21:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:28.313 21:31:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:28.313 21:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:41:28.313 21:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:41:28.313 21:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:28.313 21:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:41:28.313 21:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:28.313 21:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:28.313 21:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:28.313 21:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:28.313 21:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:30.851 00:41:30.851 real 0m33.172s 00:41:30.851 user 0m47.642s 00:41:30.851 sys 0m10.187s 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:30.851 ************************************ 00:41:30.851 END TEST nvmf_zcopy 00:41:30.851 ************************************ 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:30.851 ************************************ 00:41:30.851 START TEST nvmf_nmic 00:41:30.851 ************************************ 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:30.851 * Looking for test storage... 
00:41:30.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:30.851 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:30.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:30.852 --rc genhtml_branch_coverage=1 00:41:30.852 --rc genhtml_function_coverage=1 00:41:30.852 --rc genhtml_legend=1 00:41:30.852 --rc geninfo_all_blocks=1 00:41:30.852 --rc geninfo_unexecuted_blocks=1 00:41:30.852 00:41:30.852 ' 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:30.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:30.852 --rc genhtml_branch_coverage=1 00:41:30.852 --rc genhtml_function_coverage=1 00:41:30.852 --rc genhtml_legend=1 00:41:30.852 --rc geninfo_all_blocks=1 00:41:30.852 --rc geninfo_unexecuted_blocks=1 00:41:30.852 00:41:30.852 ' 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:30.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:30.852 --rc genhtml_branch_coverage=1 00:41:30.852 --rc genhtml_function_coverage=1 00:41:30.852 --rc genhtml_legend=1 00:41:30.852 --rc geninfo_all_blocks=1 00:41:30.852 --rc geninfo_unexecuted_blocks=1 00:41:30.852 00:41:30.852 ' 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:30.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:30.852 --rc genhtml_branch_coverage=1 00:41:30.852 --rc genhtml_function_coverage=1 00:41:30.852 --rc genhtml_legend=1 00:41:30.852 --rc geninfo_all_blocks=1 00:41:30.852 --rc geninfo_unexecuted_blocks=1 00:41:30.852 00:41:30.852 ' 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:30.852 21:31:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:41:30.852 21:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:32.841 21:31:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:32.841 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:32.841 21:31:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:32.841 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:32.841 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:32.841 
21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:32.841 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:32.841 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:32.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:32.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:41:32.842 00:41:32.842 --- 10.0.0.2 ping statistics --- 00:41:32.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:32.842 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:32.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:32.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:41:32.842 00:41:32.842 --- 10.0.0.1 ping statistics --- 00:41:32.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:32.842 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3215651 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3215651 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3215651 ']' 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:32.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:32.842 21:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:32.842 [2024-11-19 21:31:06.460132] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:32.842 [2024-11-19 21:31:06.462537] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:41:32.842 [2024-11-19 21:31:06.462648] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:32.842 [2024-11-19 21:31:06.614319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:33.100 [2024-11-19 21:31:06.756439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:33.100 [2024-11-19 21:31:06.756515] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:33.100 [2024-11-19 21:31:06.756542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:33.100 [2024-11-19 21:31:06.756564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:33.100 [2024-11-19 21:31:06.756586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:33.100 [2024-11-19 21:31:06.759299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:33.100 [2024-11-19 21:31:06.759368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:33.100 [2024-11-19 21:31:06.759458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:33.100 [2024-11-19 21:31:06.759468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:33.359 [2024-11-19 21:31:07.126548] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:33.359 [2024-11-19 21:31:07.138386] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:33.359 [2024-11-19 21:31:07.138555] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
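The target application itself is started inside that namespace. The iptables rule and the nvmf_tgt invocation logged above amount to the following; the paths and arguments are the ones printed in the log, while the backgrounding and pid capture are an assumption about what nvmfappstart/waitforlisten do around them:

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &   # 4 reactors (cores 0-3), interrupt mode
    nvmfpid=$!                                     # 3215651 in this run

The SPDK_NVMF comment tag on the iptables rule is what lets the teardown later strip exactly these rules again with a grep -v SPDK_NVMF pass over iptables-save.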
00:41:33.359 [2024-11-19 21:31:07.139379] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:33.359 [2024-11-19 21:31:07.139730] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:33.927 [2024-11-19 21:31:07.440527] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:33.927 Malloc0 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:33.927 
21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:33.927 [2024-11-19 21:31:07.556783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:41:33.927 test case1: single bdev can't be used in multiple subsystems 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:33.927 [2024-11-19 21:31:07.580435] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:41:33.927 [2024-11-19 21:31:07.580496] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:41:33.927 [2024-11-19 21:31:07.580521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:33.927 request: 00:41:33.927 { 00:41:33.927 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:41:33.927 "namespace": { 00:41:33.927 "bdev_name": "Malloc0", 00:41:33.927 "no_auto_visible": false 00:41:33.927 }, 00:41:33.927 "method": "nvmf_subsystem_add_ns", 00:41:33.927 "req_id": 1 00:41:33.927 } 00:41:33.927 Got JSON-RPC error response 00:41:33.927 response: 00:41:33.927 { 00:41:33.927 "code": -32602, 00:41:33.927 "message": "Invalid parameters" 00:41:33.927 } 00:41:33.927 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:33.928 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:41:33.928 21:31:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:41:33.928 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:41:33.928 Adding namespace failed - expected result. 00:41:33.928 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:41:33.928 test case2: host connect to nvmf target in multiple paths 00:41:33.928 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:41:33.928 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.928 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:33.928 [2024-11-19 21:31:07.588518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:41:33.928 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.928 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:34.187 21:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:41:34.445 21:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:41:34.445 21:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:41:34.445 21:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:34.445 21:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:34.445 21:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:41:36.343 21:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:36.343 21:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:36.343 21:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:36.344 21:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:36.344 21:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:36.344 21:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:41:36.344 21:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:36.344 [global] 00:41:36.344 thread=1 00:41:36.344 invalidate=1 
00:41:36.344 rw=write 00:41:36.344 time_based=1 00:41:36.344 runtime=1 00:41:36.344 ioengine=libaio 00:41:36.344 direct=1 00:41:36.344 bs=4096 00:41:36.344 iodepth=1 00:41:36.344 norandommap=0 00:41:36.344 numjobs=1 00:41:36.344 00:41:36.344 verify_dump=1 00:41:36.344 verify_backlog=512 00:41:36.344 verify_state_save=0 00:41:36.344 do_verify=1 00:41:36.344 verify=crc32c-intel 00:41:36.344 [job0] 00:41:36.344 filename=/dev/nvme0n1 00:41:36.344 Could not set queue depth (nvme0n1) 00:41:36.601 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:36.601 fio-3.35 00:41:36.601 Starting 1 thread 00:41:37.974 00:41:37.974 job0: (groupid=0, jobs=1): err= 0: pid=3216206: Tue Nov 19 21:31:11 2024 00:41:37.974 read: IOPS=21, BW=85.1KiB/s (87.1kB/s)(88.0KiB/1034msec) 00:41:37.974 slat (nsec): min=8937, max=36010, avg=18653.32, stdev=9720.78 00:41:37.974 clat (usec): min=40516, max=41013, avg=40947.84, stdev=101.62 00:41:37.974 lat (usec): min=40525, max=41028, avg=40966.50, stdev=102.64 00:41:37.974 clat percentiles (usec): 00:41:37.974 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:41:37.974 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:37.974 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:37.974 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:37.974 | 99.99th=[41157] 00:41:37.974 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:41:37.974 slat (nsec): min=7071, max=55692, avg=18015.85, stdev=7087.86 00:41:37.974 clat (usec): min=187, max=1377, avg=235.64, stdev=54.66 00:41:37.974 lat (usec): min=196, max=1394, avg=253.66, stdev=55.64 00:41:37.974 clat percentiles (usec): 00:41:37.974 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 221], 00:41:37.974 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 231], 60.00th=[ 237], 00:41:37.974 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 258], 95.00th=[ 265], 00:41:37.974 | 99.00th=[ 281], 99.50th=[ 351], 99.90th=[ 1385], 99.95th=[ 1385], 00:41:37.974 | 99.99th=[ 1385] 00:41:37.974 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:41:37.974 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:37.974 lat (usec) : 250=74.72%, 500=20.97% 00:41:37.974 lat (msec) : 2=0.19%, 50=4.12% 00:41:37.974 cpu : usr=0.87%, sys=0.87%, ctx=534, majf=0, minf=1 00:41:37.974 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:37.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.974 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:37.974 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:37.974 00:41:37.974 Run status group 0 (all jobs): 00:41:37.974 READ: bw=85.1KiB/s (87.1kB/s), 85.1KiB/s-85.1KiB/s (87.1kB/s-87.1kB/s), io=88.0KiB (90.1kB), run=1034-1034msec 00:41:37.974 WRITE: bw=1981KiB/s (2028kB/s), 1981KiB/s-1981KiB/s (2028kB/s-2028kB/s), io=2048KiB (2097kB), run=1034-1034msec 00:41:37.974 00:41:37.974 Disk stats (read/write): 00:41:37.974 nvme0n1: ios=68/512, merge=0/0, ticks=769/110, in_queue=879, util=91.98% 00:41:37.974 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:37.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:41:37.974 21:31:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:37.974 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:41:37.974 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:37.974 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:37.974 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:37.975 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:38.234 rmmod nvme_tcp 00:41:38.234 rmmod nvme_fabrics 00:41:38.234 rmmod nvme_keyring 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3215651 ']' 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3215651 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3215651 ']' 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3215651 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3215651 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3215651' 00:41:38.234 killing process with pid 3215651 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3215651 00:41:38.234 21:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3215651 00:41:39.608 21:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:39.608 21:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:39.608 21:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:39.608 21:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:41:39.608 21:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:41:39.608 21:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:39.608 21:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:41:39.608 21:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:39.608 21:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:39.608 21:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:39.608 21:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:39.608 21:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:41.511 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:41.511 00:41:41.511 real 0m11.154s 00:41:41.511 user 0m19.412s 00:41:41.511 sys 0m3.594s 00:41:41.511 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:41.511 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:41.511 ************************************ 00:41:41.511 END TEST nvmf_nmic 00:41:41.511 ************************************ 00:41:41.511 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:41.511 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:41.511 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:41.511 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:41.771 ************************************ 00:41:41.771 START TEST nvmf_fio_target 00:41:41.771 ************************************ 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:41.771 * Looking for test storage... 
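Before the nvmf_fio_target run below gets going, the nvmf_nmic test that just finished is easier to read with the xtrace noise stripped. rpc_cmd is shown here as a direct scripts/rpc.py call against /var/tmp/spdk.sock (an assumption about the wrapper; the fio test below invokes rpc.py explicitly in the same way), and every command and argument is taken from the log:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # test case1: the same bdev cannot back namespaces in two subsystems
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # expected to fail: Malloc0 already claimed
    # test case2: one subsystem reachable over two listeners (multipath)
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # the log also passes --hostid with the same UUID on both connects
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

The JSON-RPC 'Invalid parameters' response earlier in the log is the expected outcome of the second nvmf_subsystem_add_ns, since Malloc0 is already claimed exclusive_write by cnode1. The teardown just logged then mirrors the setup: nvme disconnect, kill and wait on pid 3215651, modprobe -r nvme-tcp, iptables-restore without the SPDK_NVMF-tagged rules, and the namespace cleanup in remove_spdk_ns.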
00:41:41.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:41.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:41.771 --rc genhtml_branch_coverage=1 00:41:41.771 --rc genhtml_function_coverage=1 00:41:41.771 --rc genhtml_legend=1 00:41:41.771 --rc geninfo_all_blocks=1 00:41:41.771 --rc geninfo_unexecuted_blocks=1 00:41:41.771 00:41:41.771 ' 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:41.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:41.771 --rc genhtml_branch_coverage=1 00:41:41.771 --rc genhtml_function_coverage=1 00:41:41.771 --rc genhtml_legend=1 00:41:41.771 --rc geninfo_all_blocks=1 00:41:41.771 --rc geninfo_unexecuted_blocks=1 00:41:41.771 00:41:41.771 ' 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:41.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:41.771 --rc genhtml_branch_coverage=1 00:41:41.771 --rc genhtml_function_coverage=1 00:41:41.771 --rc genhtml_legend=1 00:41:41.771 --rc geninfo_all_blocks=1 00:41:41.771 --rc geninfo_unexecuted_blocks=1 00:41:41.771 00:41:41.771 ' 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:41.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:41.771 --rc genhtml_branch_coverage=1 00:41:41.771 --rc genhtml_function_coverage=1 00:41:41.771 --rc genhtml_legend=1 00:41:41.771 --rc geninfo_all_blocks=1 00:41:41.771 --rc geninfo_unexecuted_blocks=1 00:41:41.771 
00:41:41.771 ' 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:41.771 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:41:41.772 21:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:43.676 21:31:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:43.676 21:31:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:43.676 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:43.676 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:43.676 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:43.676 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:43.677 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:43.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:43.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:41:43.677 00:41:43.677 --- 10.0.0.2 ping statistics --- 00:41:43.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:43.677 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:41:43.677 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:43.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:43.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:41:43.936 00:41:43.936 --- 10.0.0.1 ping statistics --- 00:41:43.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:43.936 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3218485 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3218485 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3218485 ']' 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:43.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
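nvmfappstart above launches a second nvmf_tgt (pid 3218485) for the fio test and then blocks in waitforlisten until the RPC socket answers. The helper's body is not shown in this log, so the loop below is only an illustrative sketch of that kind of readiness check, built from the real scripts/rpc.py client and its rpc_get_methods method:

    pid=3218485                                   # the nvmfpid reported above
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
        sleep 0.5
    done

Even though nvmf_tgt runs inside cvl_0_0_ns_spdk, the path-based UNIX-domain socket /var/tmp/spdk.sock is not scoped to a network namespace, which is why none of the rpc calls in this log need an ip netns exec prefix.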
00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:43.936 21:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:43.936 [2024-11-19 21:31:17.604094] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:43.936 [2024-11-19 21:31:17.607236] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:41:43.936 [2024-11-19 21:31:17.607358] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:44.194 [2024-11-19 21:31:17.773235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:44.194 [2024-11-19 21:31:17.918745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:44.195 [2024-11-19 21:31:17.918824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:44.195 [2024-11-19 21:31:17.918852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:44.195 [2024-11-19 21:31:17.918874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:44.195 [2024-11-19 21:31:17.918897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:44.195 [2024-11-19 21:31:17.922018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:44.195 [2024-11-19 21:31:17.922089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:44.195 [2024-11-19 21:31:17.922135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:44.195 [2024-11-19 21:31:17.922146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:44.763 [2024-11-19 21:31:18.295956] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:44.763 [2024-11-19 21:31:18.305417] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:44.763 [2024-11-19 21:31:18.305607] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:44.763 [2024-11-19 21:31:18.306426] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:44.763 [2024-11-19 21:31:18.306770] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
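With the target reactors up in interrupt mode, target/fio.sh provisions everything over JSON-RPC before the host connects: a TCP transport, seven 64 MB malloc bdevs with a 512-byte block size, a RAID0 and a concat volume built from five of them, and a single subsystem exposing all four namespaces on the in-namespace address. The condensed sketch below mirrors the rpc.py calls that appear in the trace that follows (NQN, serial, sizes, flags and addresses are the ones used by this test); paths are shortened and repeated calls are collapsed, so treat it as a summary for readability rather than the script itself.

  RPC=./scripts/rpc.py

  # TCP transport, created with the same flags the test passes (fio.sh@19).
  $RPC nvmf_create_transport -t tcp -o -u 8192

  # Seven malloc bdevs (Malloc0..Malloc6 in the trace); Malloc2/3 feed raid0,
  # Malloc4/5/6 feed concat0, Malloc0/1 are exported directly.
  $RPC bdev_malloc_create 64 512                                     # repeated per bdev
  $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

  # Subsystem, namespaces and the TCP listener, in the order the test issues them.
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

  # Host side: connect and wait for the four namespaces to appear
  # (the test additionally passes --hostnqn/--hostid derived from the host UUID).
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The fio runs that follow then drive /dev/nvme0n1..nvme0n4 — one block device per exported namespace — first with iodepth=1 write and randwrite passes, then with iodepth=128 against the same devices.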
00:41:45.022 21:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:45.022 21:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:41:45.022 21:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:45.022 21:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:45.022 21:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:45.022 21:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:45.022 21:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:45.281 [2024-11-19 21:31:18.919367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:45.281 21:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:45.540 21:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:41:45.540 21:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:46.107 21:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:41:46.107 21:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:46.365 21:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:41:46.365 21:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:46.931 21:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:41:46.931 21:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:41:47.189 21:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:47.447 21:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:41:47.447 21:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:47.705 21:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:41:47.705 21:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:48.272 21:31:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:41:48.272 21:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:41:48.530 21:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:48.788 21:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:48.788 21:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:49.046 21:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:49.046 21:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:41:49.304 21:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:49.562 [2024-11-19 21:31:23.219565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:49.562 21:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:41:49.820 21:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:41:50.079 21:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:50.337 21:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:41:50.337 21:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:41:50.337 21:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:50.337 21:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:41:50.337 21:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:41:50.337 21:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:41:52.235 21:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:52.235 21:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:41:52.235 21:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:52.235 21:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:41:52.235 21:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:52.235 21:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:41:52.235 21:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:52.493 [global] 00:41:52.493 thread=1 00:41:52.493 invalidate=1 00:41:52.493 rw=write 00:41:52.493 time_based=1 00:41:52.493 runtime=1 00:41:52.493 ioengine=libaio 00:41:52.493 direct=1 00:41:52.493 bs=4096 00:41:52.493 iodepth=1 00:41:52.493 norandommap=0 00:41:52.493 numjobs=1 00:41:52.493 00:41:52.493 verify_dump=1 00:41:52.493 verify_backlog=512 00:41:52.493 verify_state_save=0 00:41:52.493 do_verify=1 00:41:52.493 verify=crc32c-intel 00:41:52.493 [job0] 00:41:52.493 filename=/dev/nvme0n1 00:41:52.493 [job1] 00:41:52.493 filename=/dev/nvme0n2 00:41:52.493 [job2] 00:41:52.493 filename=/dev/nvme0n3 00:41:52.493 [job3] 00:41:52.493 filename=/dev/nvme0n4 00:41:52.493 Could not set queue depth (nvme0n1) 00:41:52.493 Could not set queue depth (nvme0n2) 00:41:52.493 Could not set queue depth (nvme0n3) 00:41:52.493 Could not set queue depth (nvme0n4) 00:41:52.493 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:52.493 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:52.493 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:52.493 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:52.493 fio-3.35 00:41:52.493 Starting 4 threads 00:41:53.868 00:41:53.868 job0: (groupid=0, jobs=1): err= 0: pid=3219563: Tue Nov 19 21:31:27 2024 00:41:53.868 read: IOPS=19, BW=79.4KiB/s (81.3kB/s)(80.0KiB/1008msec) 00:41:53.868 slat (nsec): min=8395, max=39039, avg=30192.45, stdev=10189.73 00:41:53.868 clat (usec): min=40835, max=41228, avg=40972.99, stdev=90.56 00:41:53.868 lat (usec): min=40871, max=41236, avg=41003.19, stdev=85.34 00:41:53.868 clat percentiles (usec): 00:41:53.868 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:41:53.868 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:53.868 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:53.868 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:53.868 | 99.99th=[41157] 00:41:53.868 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:41:53.868 slat (nsec): min=8485, max=32676, avg=12939.18, stdev=5327.99 00:41:53.868 clat (usec): min=194, max=1245, avg=345.92, stdev=90.52 00:41:53.868 lat (usec): min=207, max=1255, avg=358.86, stdev=89.71 00:41:53.868 clat percentiles (usec): 00:41:53.868 | 1.00th=[ 223], 5.00th=[ 249], 10.00th=[ 260], 20.00th=[ 285], 00:41:53.868 | 30.00th=[ 297], 40.00th=[ 322], 50.00th=[ 338], 60.00th=[ 363], 00:41:53.868 | 70.00th=[ 371], 80.00th=[ 392], 90.00th=[ 416], 95.00th=[ 457], 00:41:53.868 
| 99.00th=[ 709], 99.50th=[ 930], 99.90th=[ 1254], 99.95th=[ 1254], 00:41:53.868 | 99.99th=[ 1254] 00:41:53.868 bw ( KiB/s): min= 4087, max= 4087, per=34.11%, avg=4087.00, stdev= 0.00, samples=1 00:41:53.868 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:41:53.868 lat (usec) : 250=4.89%, 500=88.91%, 750=1.50%, 1000=0.75% 00:41:53.868 lat (msec) : 2=0.19%, 50=3.76% 00:41:53.868 cpu : usr=0.20%, sys=1.09%, ctx=534, majf=0, minf=1 00:41:53.868 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:53.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.868 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:53.868 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:53.868 job1: (groupid=0, jobs=1): err= 0: pid=3219564: Tue Nov 19 21:31:27 2024 00:41:53.868 read: IOPS=95, BW=382KiB/s (392kB/s)(384KiB/1004msec) 00:41:53.868 slat (nsec): min=8135, max=42023, avg=23025.77, stdev=6828.67 00:41:53.868 clat (usec): min=318, max=41444, avg=8448.71, stdev=16257.67 00:41:53.868 lat (usec): min=338, max=41461, avg=8471.73, stdev=16259.05 00:41:53.868 clat percentiles (usec): 00:41:53.868 | 1.00th=[ 318], 5.00th=[ 338], 10.00th=[ 363], 20.00th=[ 404], 00:41:53.868 | 30.00th=[ 412], 40.00th=[ 416], 50.00th=[ 429], 60.00th=[ 433], 00:41:53.868 | 70.00th=[ 449], 80.00th=[ 545], 90.00th=[41157], 95.00th=[41157], 00:41:53.868 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:41:53.868 | 99.99th=[41681] 00:41:53.868 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:41:53.868 slat (nsec): min=7773, max=42584, avg=13860.13, stdev=5542.43 00:41:53.868 clat (usec): min=217, max=623, avg=346.57, stdev=60.52 00:41:53.868 lat (usec): min=227, max=639, avg=360.43, stdev=59.92 00:41:53.868 clat percentiles (usec): 00:41:53.868 | 1.00th=[ 223], 5.00th=[ 237], 10.00th=[ 265], 20.00th=[ 297], 00:41:53.868 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 347], 60.00th=[ 363], 00:41:53.868 | 70.00th=[ 371], 80.00th=[ 400], 90.00th=[ 416], 95.00th=[ 445], 00:41:53.868 | 99.00th=[ 502], 99.50th=[ 545], 99.90th=[ 627], 99.95th=[ 627], 00:41:53.868 | 99.99th=[ 627] 00:41:53.868 bw ( KiB/s): min= 4087, max= 4087, per=34.11%, avg=4087.00, stdev= 0.00, samples=1 00:41:53.868 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:41:53.868 lat (usec) : 250=6.74%, 500=88.98%, 750=1.15% 00:41:53.868 lat (msec) : 50=3.12% 00:41:53.868 cpu : usr=0.50%, sys=1.30%, ctx=610, majf=0, minf=1 00:41:53.868 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:53.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.868 issued rwts: total=96,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:53.868 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:53.868 job2: (groupid=0, jobs=1): err= 0: pid=3219565: Tue Nov 19 21:31:27 2024 00:41:53.868 read: IOPS=769, BW=3080KiB/s (3154kB/s)(3120KiB/1013msec) 00:41:53.868 slat (nsec): min=4565, max=69724, avg=16867.33, stdev=11083.73 00:41:53.868 clat (usec): min=213, max=41495, avg=866.07, stdev=4589.13 00:41:53.868 lat (usec): min=219, max=41508, avg=882.94, stdev=4590.06 00:41:53.868 clat percentiles (usec): 00:41:53.868 | 1.00th=[ 233], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 277], 00:41:53.868 | 30.00th=[ 
310], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 371], 00:41:53.868 | 70.00th=[ 379], 80.00th=[ 388], 90.00th=[ 424], 95.00th=[ 506], 00:41:53.868 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:41:53.868 | 99.99th=[41681] 00:41:53.868 write: IOPS=1010, BW=4043KiB/s (4140kB/s)(4096KiB/1013msec); 0 zone resets 00:41:53.868 slat (nsec): min=6703, max=49003, avg=16929.69, stdev=5064.82 00:41:53.868 clat (usec): min=186, max=1214, avg=288.52, stdev=88.97 00:41:53.868 lat (usec): min=202, max=1224, avg=305.45, stdev=87.53 00:41:53.868 clat percentiles (usec): 00:41:53.868 | 1.00th=[ 194], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 231], 00:41:53.868 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 249], 60.00th=[ 281], 00:41:53.868 | 70.00th=[ 322], 80.00th=[ 359], 90.00th=[ 400], 95.00th=[ 441], 00:41:53.868 | 99.00th=[ 510], 99.50th=[ 545], 99.90th=[ 1074], 99.95th=[ 1221], 00:41:53.868 | 99.99th=[ 1221] 00:41:53.868 bw ( KiB/s): min= 3616, max= 4566, per=34.15%, avg=4091.00, stdev=671.75, samples=2 00:41:53.868 iops : min= 904, max= 1141, avg=1022.50, stdev=167.58, samples=2 00:41:53.868 lat (usec) : 250=30.32%, 500=66.69%, 750=2.22%, 1000=0.11% 00:41:53.868 lat (msec) : 2=0.11%, 50=0.55% 00:41:53.868 cpu : usr=1.78%, sys=2.87%, ctx=1808, majf=0, minf=1 00:41:53.868 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:53.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.868 issued rwts: total=780,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:53.868 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:53.868 job3: (groupid=0, jobs=1): err= 0: pid=3219566: Tue Nov 19 21:31:27 2024 00:41:53.868 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:41:53.868 slat (nsec): min=6811, max=52238, avg=23798.20, stdev=10459.99 00:41:53.868 clat (usec): min=246, max=41239, avg=1316.90, stdev=5882.68 00:41:53.868 lat (usec): min=259, max=41259, avg=1340.70, stdev=5882.77 00:41:53.868 clat percentiles (usec): 00:41:53.868 | 1.00th=[ 265], 5.00th=[ 289], 10.00th=[ 338], 20.00th=[ 388], 00:41:53.868 | 30.00th=[ 420], 40.00th=[ 449], 50.00th=[ 465], 60.00th=[ 478], 00:41:53.868 | 70.00th=[ 490], 80.00th=[ 502], 90.00th=[ 529], 95.00th=[ 570], 00:41:53.868 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:53.868 | 99.99th=[41157] 00:41:53.868 write: IOPS=985, BW=3940KiB/s (4035kB/s)(3944KiB/1001msec); 0 zone resets 00:41:53.868 slat (nsec): min=6834, max=58560, avg=17318.40, stdev=8173.70 00:41:53.868 clat (usec): min=196, max=605, avg=290.37, stdev=76.65 00:41:53.868 lat (usec): min=213, max=615, avg=307.69, stdev=74.21 00:41:53.868 clat percentiles (usec): 00:41:53.868 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 217], 00:41:53.868 | 30.00th=[ 225], 40.00th=[ 237], 50.00th=[ 273], 60.00th=[ 306], 00:41:53.868 | 70.00th=[ 343], 80.00th=[ 367], 90.00th=[ 396], 95.00th=[ 420], 00:41:53.868 | 99.00th=[ 482], 99.50th=[ 506], 99.90th=[ 603], 99.95th=[ 603], 00:41:53.868 | 99.99th=[ 603] 00:41:53.868 bw ( KiB/s): min= 4087, max= 4087, per=34.11%, avg=4087.00, stdev= 0.00, samples=1 00:41:53.868 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:41:53.868 lat (usec) : 250=29.24%, 500=62.82%, 750=7.21% 00:41:53.868 lat (msec) : 50=0.73% 00:41:53.868 cpu : usr=1.80%, sys=2.60%, ctx=1500, majf=0, minf=1 00:41:53.868 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:41:53.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.868 issued rwts: total=512,986,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:53.868 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:53.868 00:41:53.868 Run status group 0 (all jobs): 00:41:53.868 READ: bw=5560KiB/s (5693kB/s), 79.4KiB/s-3080KiB/s (81.3kB/s-3154kB/s), io=5632KiB (5767kB), run=1001-1013msec 00:41:53.868 WRITE: bw=11.7MiB/s (12.3MB/s), 2032KiB/s-4043KiB/s (2081kB/s-4140kB/s), io=11.9MiB (12.4MB), run=1001-1013msec 00:41:53.868 00:41:53.868 Disk stats (read/write): 00:41:53.868 nvme0n1: ios=39/512, merge=0/0, ticks=1519/175, in_queue=1694, util=85.67% 00:41:53.868 nvme0n2: ios=140/512, merge=0/0, ticks=719/170, in_queue=889, util=91.56% 00:41:53.868 nvme0n3: ios=833/1024, merge=0/0, ticks=1358/292, in_queue=1650, util=93.43% 00:41:53.868 nvme0n4: ios=535/512, merge=0/0, ticks=1535/171, in_queue=1706, util=94.12% 00:41:53.868 21:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:41:53.868 [global] 00:41:53.868 thread=1 00:41:53.868 invalidate=1 00:41:53.868 rw=randwrite 00:41:53.868 time_based=1 00:41:53.868 runtime=1 00:41:53.869 ioengine=libaio 00:41:53.869 direct=1 00:41:53.869 bs=4096 00:41:53.869 iodepth=1 00:41:53.869 norandommap=0 00:41:53.869 numjobs=1 00:41:53.869 00:41:53.869 verify_dump=1 00:41:53.869 verify_backlog=512 00:41:53.869 verify_state_save=0 00:41:53.869 do_verify=1 00:41:53.869 verify=crc32c-intel 00:41:53.869 [job0] 00:41:53.869 filename=/dev/nvme0n1 00:41:53.869 [job1] 00:41:53.869 filename=/dev/nvme0n2 00:41:53.869 [job2] 00:41:53.869 filename=/dev/nvme0n3 00:41:53.869 [job3] 00:41:53.869 filename=/dev/nvme0n4 00:41:53.869 Could not set queue depth (nvme0n1) 00:41:53.869 Could not set queue depth (nvme0n2) 00:41:53.869 Could not set queue depth (nvme0n3) 00:41:53.869 Could not set queue depth (nvme0n4) 00:41:54.127 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:54.127 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:54.127 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:54.127 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:54.127 fio-3.35 00:41:54.127 Starting 4 threads 00:41:55.504 00:41:55.504 job0: (groupid=0, jobs=1): err= 0: pid=3219909: Tue Nov 19 21:31:28 2024 00:41:55.504 read: IOPS=1436, BW=5746KiB/s (5884kB/s)(5752KiB/1001msec) 00:41:55.504 slat (nsec): min=5834, max=40934, avg=8226.46, stdev=3331.42 00:41:55.504 clat (usec): min=266, max=41038, avg=383.46, stdev=1077.37 00:41:55.504 lat (usec): min=273, max=41044, avg=391.68, stdev=1077.39 00:41:55.504 clat percentiles (usec): 00:41:55.504 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 297], 00:41:55.504 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 330], 00:41:55.504 | 70.00th=[ 347], 80.00th=[ 404], 90.00th=[ 494], 95.00th=[ 529], 00:41:55.504 | 99.00th=[ 791], 99.50th=[ 955], 99.90th=[ 1188], 99.95th=[41157], 00:41:55.504 | 99.99th=[41157] 00:41:55.504 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:41:55.504 slat (nsec): 
min=6784, max=42245, avg=10022.05, stdev=3535.02 00:41:55.504 clat (usec): min=171, max=800, avg=269.42, stdev=76.15 00:41:55.504 lat (usec): min=179, max=809, avg=279.44, stdev=76.68 00:41:55.504 clat percentiles (usec): 00:41:55.504 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 202], 00:41:55.504 | 30.00th=[ 210], 40.00th=[ 231], 50.00th=[ 253], 60.00th=[ 269], 00:41:55.504 | 70.00th=[ 297], 80.00th=[ 330], 90.00th=[ 388], 95.00th=[ 404], 00:41:55.504 | 99.00th=[ 469], 99.50th=[ 529], 99.90th=[ 783], 99.95th=[ 799], 00:41:55.504 | 99.99th=[ 799] 00:41:55.504 bw ( KiB/s): min= 7528, max= 7528, per=53.93%, avg=7528.00, stdev= 0.00, samples=1 00:41:55.504 iops : min= 1882, max= 1882, avg=1882.00, stdev= 0.00, samples=1 00:41:55.504 lat (usec) : 250=24.04%, 500=71.28%, 750=4.07%, 1000=0.40% 00:41:55.504 lat (msec) : 2=0.17%, 50=0.03% 00:41:55.504 cpu : usr=2.60%, sys=2.80%, ctx=2976, majf=0, minf=1 00:41:55.504 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:55.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.504 issued rwts: total=1438,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:55.504 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:55.504 job1: (groupid=0, jobs=1): err= 0: pid=3219910: Tue Nov 19 21:31:28 2024 00:41:55.504 read: IOPS=679, BW=2717KiB/s (2782kB/s)(2720KiB/1001msec) 00:41:55.504 slat (nsec): min=4471, max=36033, avg=12041.31, stdev=3786.79 00:41:55.504 clat (usec): min=201, max=42503, avg=1063.53, stdev=5133.23 00:41:55.504 lat (usec): min=214, max=42516, avg=1075.57, stdev=5133.51 00:41:55.504 clat percentiles (usec): 00:41:55.504 | 1.00th=[ 227], 5.00th=[ 277], 10.00th=[ 310], 20.00th=[ 367], 00:41:55.504 | 30.00th=[ 379], 40.00th=[ 388], 50.00th=[ 396], 60.00th=[ 412], 00:41:55.504 | 70.00th=[ 445], 80.00th=[ 469], 90.00th=[ 515], 95.00th=[ 553], 00:41:55.504 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:41:55.504 | 99.99th=[42730] 00:41:55.504 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:41:55.504 slat (nsec): min=5801, max=37717, avg=8824.51, stdev=3908.36 00:41:55.504 clat (usec): min=165, max=440, avg=248.95, stdev=46.97 00:41:55.504 lat (usec): min=177, max=446, avg=257.77, stdev=46.96 00:41:55.504 clat percentiles (usec): 00:41:55.504 | 1.00th=[ 176], 5.00th=[ 188], 10.00th=[ 204], 20.00th=[ 217], 00:41:55.504 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 249], 00:41:55.504 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 302], 95.00th=[ 375], 00:41:55.504 | 99.00th=[ 383], 99.50th=[ 392], 99.90th=[ 437], 99.95th=[ 441], 00:41:55.504 | 99.99th=[ 441] 00:41:55.504 bw ( KiB/s): min= 5984, max= 5984, per=42.87%, avg=5984.00, stdev= 0.00, samples=1 00:41:55.504 iops : min= 1496, max= 1496, avg=1496.00, stdev= 0.00, samples=1 00:41:55.504 lat (usec) : 250=39.14%, 500=55.34%, 750=4.87% 00:41:55.504 lat (msec) : 50=0.65% 00:41:55.504 cpu : usr=0.50%, sys=2.10%, ctx=1705, majf=0, minf=2 00:41:55.504 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:55.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.504 issued rwts: total=680,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:55.504 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:55.504 job2: (groupid=0, jobs=1): 
err= 0: pid=3219911: Tue Nov 19 21:31:28 2024 00:41:55.504 read: IOPS=205, BW=823KiB/s (843kB/s)(824KiB/1001msec) 00:41:55.504 slat (nsec): min=6319, max=28355, avg=8759.37, stdev=3574.24 00:41:55.504 clat (usec): min=289, max=41175, avg=3981.96, stdev=11473.22 00:41:55.504 lat (usec): min=295, max=41181, avg=3990.72, stdev=11475.52 00:41:55.504 clat percentiles (usec): 00:41:55.504 | 1.00th=[ 297], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 343], 00:41:55.504 | 30.00th=[ 371], 40.00th=[ 408], 50.00th=[ 486], 60.00th=[ 502], 00:41:55.504 | 70.00th=[ 515], 80.00th=[ 529], 90.00th=[ 693], 95.00th=[41157], 00:41:55.504 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:55.504 | 99.99th=[41157] 00:41:55.504 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:41:55.504 slat (nsec): min=7785, max=39520, avg=13277.29, stdev=5383.33 00:41:55.504 clat (usec): min=209, max=716, avg=330.16, stdev=79.70 00:41:55.504 lat (usec): min=218, max=729, avg=343.43, stdev=80.33 00:41:55.504 clat percentiles (usec): 00:41:55.504 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 247], 00:41:55.504 | 30.00th=[ 265], 40.00th=[ 289], 50.00th=[ 330], 60.00th=[ 355], 00:41:55.504 | 70.00th=[ 396], 80.00th=[ 404], 90.00th=[ 412], 95.00th=[ 445], 00:41:55.504 | 99.00th=[ 498], 99.50th=[ 652], 99.90th=[ 717], 99.95th=[ 717], 00:41:55.504 | 99.99th=[ 717] 00:41:55.504 bw ( KiB/s): min= 4096, max= 4096, per=29.34%, avg=4096.00, stdev= 0.00, samples=1 00:41:55.504 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:55.504 lat (usec) : 250=15.32%, 500=72.14%, 750=9.75%, 1000=0.28% 00:41:55.504 lat (msec) : 50=2.51% 00:41:55.504 cpu : usr=0.60%, sys=1.00%, ctx=719, majf=0, minf=1 00:41:55.504 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:55.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.504 issued rwts: total=206,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:55.504 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:55.504 job3: (groupid=0, jobs=1): err= 0: pid=3219912: Tue Nov 19 21:31:28 2024 00:41:55.504 read: IOPS=97, BW=389KiB/s (399kB/s)(400KiB/1027msec) 00:41:55.504 slat (nsec): min=6161, max=29091, avg=11239.90, stdev=4326.81 00:41:55.504 clat (usec): min=289, max=41096, avg=8851.22, stdev=16410.93 00:41:55.504 lat (usec): min=303, max=41104, avg=8862.46, stdev=16411.99 00:41:55.504 clat percentiles (usec): 00:41:55.504 | 1.00th=[ 289], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 326], 00:41:55.504 | 30.00th=[ 338], 40.00th=[ 343], 50.00th=[ 379], 60.00th=[ 388], 00:41:55.504 | 70.00th=[ 441], 80.00th=[30016], 90.00th=[41157], 95.00th=[41157], 00:41:55.504 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:55.504 | 99.99th=[41157] 00:41:55.504 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:41:55.504 slat (nsec): min=10885, max=33655, avg=13480.48, stdev=4134.75 00:41:55.504 clat (usec): min=210, max=392, avg=257.44, stdev=22.25 00:41:55.504 lat (usec): min=222, max=404, avg=270.92, stdev=23.05 00:41:55.504 clat percentiles (usec): 00:41:55.504 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 241], 00:41:55.504 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:41:55.504 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 297], 00:41:55.504 | 99.00th=[ 322], 99.50th=[ 371], 99.90th=[ 392], 
99.95th=[ 392], 00:41:55.505 | 99.99th=[ 392] 00:41:55.505 bw ( KiB/s): min= 4096, max= 4096, per=29.34%, avg=4096.00, stdev= 0.00, samples=1 00:41:55.505 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:55.505 lat (usec) : 250=30.88%, 500=64.54%, 750=0.49%, 1000=0.49% 00:41:55.505 lat (msec) : 10=0.16%, 50=3.43% 00:41:55.505 cpu : usr=0.19%, sys=0.88%, ctx=615, majf=0, minf=1 00:41:55.505 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:55.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.505 issued rwts: total=100,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:55.505 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:55.505 00:41:55.505 Run status group 0 (all jobs): 00:41:55.505 READ: bw=9441KiB/s (9668kB/s), 389KiB/s-5746KiB/s (399kB/s-5884kB/s), io=9696KiB (9929kB), run=1001-1027msec 00:41:55.505 WRITE: bw=13.6MiB/s (14.3MB/s), 1994KiB/s-6138KiB/s (2042kB/s-6285kB/s), io=14.0MiB (14.7MB), run=1001-1027msec 00:41:55.505 00:41:55.505 Disk stats (read/write): 00:41:55.505 nvme0n1: ios=1179/1536, merge=0/0, ticks=715/406, in_queue=1121, util=94.39% 00:41:55.505 nvme0n2: ios=554/1024, merge=0/0, ticks=861/249, in_queue=1110, util=98.98% 00:41:55.505 nvme0n3: ios=90/512, merge=0/0, ticks=1427/159, in_queue=1586, util=97.19% 00:41:55.505 nvme0n4: ios=82/512, merge=0/0, ticks=1690/125, in_queue=1815, util=98.32% 00:41:55.505 21:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:41:55.505 [global] 00:41:55.505 thread=1 00:41:55.505 invalidate=1 00:41:55.505 rw=write 00:41:55.505 time_based=1 00:41:55.505 runtime=1 00:41:55.505 ioengine=libaio 00:41:55.505 direct=1 00:41:55.505 bs=4096 00:41:55.505 iodepth=128 00:41:55.505 norandommap=0 00:41:55.505 numjobs=1 00:41:55.505 00:41:55.505 verify_dump=1 00:41:55.505 verify_backlog=512 00:41:55.505 verify_state_save=0 00:41:55.505 do_verify=1 00:41:55.505 verify=crc32c-intel 00:41:55.505 [job0] 00:41:55.505 filename=/dev/nvme0n1 00:41:55.505 [job1] 00:41:55.505 filename=/dev/nvme0n2 00:41:55.505 [job2] 00:41:55.505 filename=/dev/nvme0n3 00:41:55.505 [job3] 00:41:55.505 filename=/dev/nvme0n4 00:41:55.505 Could not set queue depth (nvme0n1) 00:41:55.505 Could not set queue depth (nvme0n2) 00:41:55.505 Could not set queue depth (nvme0n3) 00:41:55.505 Could not set queue depth (nvme0n4) 00:41:55.505 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:55.505 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:55.505 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:55.505 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:55.505 fio-3.35 00:41:55.505 Starting 4 threads 00:41:56.880 00:41:56.880 job0: (groupid=0, jobs=1): err= 0: pid=3220143: Tue Nov 19 21:31:30 2024 00:41:56.880 read: IOPS=2151, BW=8605KiB/s (8811kB/s)(8708KiB/1012msec) 00:41:56.880 slat (usec): min=2, max=40961, avg=265.67, stdev=1887.75 00:41:56.880 clat (usec): min=604, max=99219, avg=27253.16, stdev=23116.85 00:41:56.880 lat (usec): min=5375, max=99257, avg=27518.83, stdev=23322.32 00:41:56.880 clat percentiles 
(usec): 00:41:56.880 | 1.00th=[ 8455], 5.00th=[12780], 10.00th=[13304], 20.00th=[13960], 00:41:56.880 | 30.00th=[14484], 40.00th=[14746], 50.00th=[14746], 60.00th=[15270], 00:41:56.880 | 70.00th=[19268], 80.00th=[49021], 90.00th=[72877], 95.00th=[78119], 00:41:56.880 | 99.00th=[88605], 99.50th=[90702], 99.90th=[96994], 99.95th=[98042], 00:41:56.880 | 99.99th=[99091] 00:41:56.880 write: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec); 0 zone resets 00:41:56.880 slat (usec): min=3, max=19791, avg=151.69, stdev=934.47 00:41:56.880 clat (usec): min=3905, max=90799, avg=26903.22, stdev=17598.43 00:41:56.880 lat (usec): min=3916, max=90813, avg=27054.91, stdev=17631.23 00:41:56.880 clat percentiles (usec): 00:41:56.880 | 1.00th=[ 4047], 5.00th=[10683], 10.00th=[12780], 20.00th=[13960], 00:41:56.880 | 30.00th=[14484], 40.00th=[17171], 50.00th=[23725], 60.00th=[27395], 00:41:56.880 | 70.00th=[28443], 80.00th=[34866], 90.00th=[49021], 95.00th=[72877], 00:41:56.880 | 99.00th=[81265], 99.50th=[81265], 99.90th=[82314], 99.95th=[86508], 00:41:56.880 | 99.99th=[90702] 00:41:56.880 bw ( KiB/s): min= 8272, max=12208, per=19.40%, avg=10240.00, stdev=2783.17, samples=2 00:41:56.880 iops : min= 2068, max= 3052, avg=2560.00, stdev=695.79, samples=2 00:41:56.880 lat (usec) : 750=0.02% 00:41:56.880 lat (msec) : 4=0.32%, 10=3.00%, 20=52.88%, 50=29.39%, 100=14.40% 00:41:56.880 cpu : usr=2.67%, sys=4.75%, ctx=244, majf=0, minf=1 00:41:56.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:41:56.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:56.880 issued rwts: total=2177,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:56.880 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:56.880 job1: (groupid=0, jobs=1): err= 0: pid=3220144: Tue Nov 19 21:31:30 2024 00:41:56.880 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:41:56.880 slat (usec): min=2, max=14174, avg=122.20, stdev=910.06 00:41:56.880 clat (usec): min=1368, max=69438, avg=16700.35, stdev=7451.37 00:41:56.880 lat (usec): min=1383, max=69448, avg=16822.55, stdev=7529.78 00:41:56.880 clat percentiles (usec): 00:41:56.880 | 1.00th=[ 6915], 5.00th=[ 9634], 10.00th=[11600], 20.00th=[12911], 00:41:56.880 | 30.00th=[13304], 40.00th=[13960], 50.00th=[15008], 60.00th=[15926], 00:41:56.880 | 70.00th=[17695], 80.00th=[19530], 90.00th=[22938], 95.00th=[26608], 00:41:56.880 | 99.00th=[56886], 99.50th=[64750], 99.90th=[69731], 99.95th=[69731], 00:41:56.880 | 99.99th=[69731] 00:41:56.880 write: IOPS=3988, BW=15.6MiB/s (16.3MB/s)(15.8MiB/1012msec); 0 zone resets 00:41:56.880 slat (usec): min=3, max=14068, avg=110.24, stdev=789.49 00:41:56.880 clat (usec): min=1479, max=69449, avg=16917.51, stdev=10734.57 00:41:56.880 lat (usec): min=1493, max=69464, avg=17027.75, stdev=10805.40 00:41:56.880 clat percentiles (usec): 00:41:56.880 | 1.00th=[ 4883], 5.00th=[ 7832], 10.00th=[ 9110], 20.00th=[11207], 00:41:56.880 | 30.00th=[12649], 40.00th=[14091], 50.00th=[14877], 60.00th=[15270], 00:41:56.880 | 70.00th=[15926], 80.00th=[17695], 90.00th=[27395], 95.00th=[33817], 00:41:56.880 | 99.00th=[64226], 99.50th=[64750], 99.90th=[65274], 99.95th=[69731], 00:41:56.880 | 99.99th=[69731] 00:41:56.880 bw ( KiB/s): min=15352, max=15912, per=29.61%, avg=15632.00, stdev=395.98, samples=2 00:41:56.880 iops : min= 3838, max= 3978, avg=3908.00, stdev=98.99, samples=2 00:41:56.880 lat (msec) : 2=0.13%, 10=10.35%, 20=74.03%, 
50=12.89%, 100=2.60% 00:41:56.880 cpu : usr=5.04%, sys=9.40%, ctx=308, majf=0, minf=1 00:41:56.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:56.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:56.880 issued rwts: total=3584,4036,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:56.880 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:56.880 job2: (groupid=0, jobs=1): err= 0: pid=3220146: Tue Nov 19 21:31:30 2024 00:41:56.880 read: IOPS=2363, BW=9456KiB/s (9683kB/s)(9484KiB/1003msec) 00:41:56.880 slat (usec): min=2, max=26561, avg=183.08, stdev=1343.63 00:41:56.880 clat (usec): min=696, max=104233, avg=22519.62, stdev=16020.87 00:41:56.880 lat (msec): min=3, max=104, avg=22.70, stdev=16.08 00:41:56.880 clat percentiles (msec): 00:41:56.880 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 14], 20.00th=[ 14], 00:41:56.880 | 30.00th=[ 15], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 19], 00:41:56.880 | 70.00th=[ 23], 80.00th=[ 31], 90.00th=[ 36], 95.00th=[ 66], 00:41:56.880 | 99.00th=[ 91], 99.50th=[ 91], 99.90th=[ 91], 99.95th=[ 91], 00:41:56.880 | 99.99th=[ 105] 00:41:56.880 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:41:56.880 slat (usec): min=3, max=14258, avg=208.47, stdev=1048.10 00:41:56.880 clat (usec): min=1547, max=123386, avg=28840.35, stdev=21705.67 00:41:56.880 lat (usec): min=1568, max=123410, avg=29048.82, stdev=21804.37 00:41:56.880 clat percentiles (msec): 00:41:56.880 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 15], 00:41:56.880 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 24], 60.00th=[ 28], 00:41:56.880 | 70.00th=[ 29], 80.00th=[ 39], 90.00th=[ 54], 95.00th=[ 74], 00:41:56.880 | 99.00th=[ 120], 99.50th=[ 123], 99.90th=[ 124], 99.95th=[ 124], 00:41:56.880 | 99.99th=[ 124] 00:41:56.880 bw ( KiB/s): min= 8577, max=11920, per=19.41%, avg=10248.50, stdev=2363.86, samples=2 00:41:56.880 iops : min= 2144, max= 2980, avg=2562.00, stdev=591.14, samples=2 00:41:56.880 lat (usec) : 750=0.02% 00:41:56.880 lat (msec) : 2=0.20%, 4=0.18%, 10=3.53%, 20=50.80%, 50=36.08% 00:41:56.880 lat (msec) : 100=7.89%, 250=1.30% 00:41:56.880 cpu : usr=3.59%, sys=5.89%, ctx=322, majf=0, minf=2 00:41:56.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:41:56.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:56.880 issued rwts: total=2371,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:56.880 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:56.880 job3: (groupid=0, jobs=1): err= 0: pid=3220147: Tue Nov 19 21:31:30 2024 00:41:56.880 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:41:56.881 slat (usec): min=2, max=13698, avg=116.81, stdev=844.42 00:41:56.881 clat (usec): min=1633, max=29372, avg=15394.26, stdev=3223.57 00:41:56.881 lat (usec): min=1650, max=29379, avg=15511.06, stdev=3303.31 00:41:56.881 clat percentiles (usec): 00:41:56.881 | 1.00th=[ 6652], 5.00th=[11207], 10.00th=[11994], 20.00th=[12911], 00:41:56.881 | 30.00th=[13698], 40.00th=[14615], 50.00th=[15795], 60.00th=[16188], 00:41:56.881 | 70.00th=[16712], 80.00th=[17171], 90.00th=[18482], 95.00th=[21627], 00:41:56.881 | 99.00th=[24511], 99.50th=[27132], 99.90th=[29230], 99.95th=[29230], 00:41:56.881 | 99.99th=[29492] 00:41:56.881 write: IOPS=4169, BW=16.3MiB/s 
(17.1MB/s)(16.4MiB/1007msec); 0 zone resets 00:41:56.881 slat (usec): min=3, max=12722, avg=103.78, stdev=683.05 00:41:56.881 clat (usec): min=1104, max=38569, avg=15253.29, stdev=2993.13 00:41:56.881 lat (usec): min=1115, max=38573, avg=15357.08, stdev=3028.09 00:41:56.881 clat percentiles (usec): 00:41:56.881 | 1.00th=[ 7046], 5.00th=[ 9110], 10.00th=[11469], 20.00th=[13566], 00:41:56.881 | 30.00th=[14615], 40.00th=[15270], 50.00th=[15664], 60.00th=[16057], 00:41:56.881 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17957], 95.00th=[19006], 00:41:56.881 | 99.00th=[22152], 99.50th=[23725], 99.90th=[34341], 99.95th=[34341], 00:41:56.881 | 99.99th=[38536] 00:41:56.881 bw ( KiB/s): min=16416, max=16432, per=31.11%, avg=16424.00, stdev=11.31, samples=2 00:41:56.881 iops : min= 4104, max= 4108, avg=4106.00, stdev= 2.83, samples=2 00:41:56.881 lat (msec) : 2=0.11%, 4=0.23%, 10=5.02%, 20=89.16%, 50=5.49% 00:41:56.881 cpu : usr=4.27%, sys=8.95%, ctx=291, majf=0, minf=2 00:41:56.881 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:56.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:56.881 issued rwts: total=4096,4199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:56.881 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:56.881 00:41:56.881 Run status group 0 (all jobs): 00:41:56.881 READ: bw=47.2MiB/s (49.5MB/s), 8605KiB/s-15.9MiB/s (8811kB/s-16.7MB/s), io=47.8MiB (50.1MB), run=1003-1012msec 00:41:56.881 WRITE: bw=51.5MiB/s (54.1MB/s), 9.88MiB/s-16.3MiB/s (10.4MB/s-17.1MB/s), io=52.2MiB (54.7MB), run=1003-1012msec 00:41:56.881 00:41:56.881 Disk stats (read/write): 00:41:56.881 nvme0n1: ios=2072/2136, merge=0/0, ticks=33521/50714, in_queue=84235, util=95.59% 00:41:56.881 nvme0n2: ios=3110/3150, merge=0/0, ticks=49598/51209, in_queue=100807, util=100.00% 00:41:56.881 nvme0n3: ios=2066/2048, merge=0/0, ticks=36973/60274, in_queue=97247, util=91.26% 00:41:56.881 nvme0n4: ios=3407/3584, merge=0/0, ticks=31158/31802, in_queue=62960, util=89.62% 00:41:56.881 21:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:41:56.881 [global] 00:41:56.881 thread=1 00:41:56.881 invalidate=1 00:41:56.881 rw=randwrite 00:41:56.881 time_based=1 00:41:56.881 runtime=1 00:41:56.881 ioengine=libaio 00:41:56.881 direct=1 00:41:56.881 bs=4096 00:41:56.881 iodepth=128 00:41:56.881 norandommap=0 00:41:56.881 numjobs=1 00:41:56.881 00:41:56.881 verify_dump=1 00:41:56.881 verify_backlog=512 00:41:56.881 verify_state_save=0 00:41:56.881 do_verify=1 00:41:56.881 verify=crc32c-intel 00:41:56.881 [job0] 00:41:56.881 filename=/dev/nvme0n1 00:41:56.881 [job1] 00:41:56.881 filename=/dev/nvme0n2 00:41:56.881 [job2] 00:41:56.881 filename=/dev/nvme0n3 00:41:56.881 [job3] 00:41:56.881 filename=/dev/nvme0n4 00:41:56.881 Could not set queue depth (nvme0n1) 00:41:56.881 Could not set queue depth (nvme0n2) 00:41:56.881 Could not set queue depth (nvme0n3) 00:41:56.881 Could not set queue depth (nvme0n4) 00:41:56.881 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:56.881 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:56.881 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:41:56.881 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:56.881 fio-3.35 00:41:56.881 Starting 4 threads 00:41:58.257 00:41:58.257 job0: (groupid=0, jobs=1): err= 0: pid=3220371: Tue Nov 19 21:31:31 2024 00:41:58.257 read: IOPS=4572, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1006msec) 00:41:58.257 slat (usec): min=2, max=8159, avg=102.35, stdev=644.79 00:41:58.257 clat (usec): min=819, max=28573, avg=13433.36, stdev=2550.11 00:41:58.257 lat (usec): min=5443, max=28577, avg=13535.71, stdev=2571.54 00:41:58.257 clat percentiles (usec): 00:41:58.257 | 1.00th=[ 6456], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11600], 00:41:58.257 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13042], 60.00th=[13435], 00:41:58.257 | 70.00th=[14091], 80.00th=[14877], 90.00th=[16909], 95.00th=[18482], 00:41:58.257 | 99.00th=[19792], 99.50th=[21103], 99.90th=[22676], 99.95th=[22676], 00:41:58.257 | 99.99th=[28443] 00:41:58.257 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:41:58.257 slat (usec): min=2, max=27339, avg=103.91, stdev=808.36 00:41:58.257 clat (usec): min=659, max=51346, avg=14289.14, stdev=4568.26 00:41:58.257 lat (usec): min=675, max=52568, avg=14393.05, stdev=4639.72 00:41:58.257 clat percentiles (usec): 00:41:58.257 | 1.00th=[ 6456], 5.00th=[ 9110], 10.00th=[11469], 20.00th=[11994], 00:41:58.257 | 30.00th=[12256], 40.00th=[12518], 50.00th=[13435], 60.00th=[14222], 00:41:58.257 | 70.00th=[14746], 80.00th=[15270], 90.00th=[17171], 95.00th=[25035], 00:41:58.257 | 99.00th=[31327], 99.50th=[32637], 99.90th=[35390], 99.95th=[35390], 00:41:58.257 | 99.99th=[51119] 00:41:58.257 bw ( KiB/s): min=16696, max=20168, per=42.56%, avg=18432.00, stdev=2455.07, samples=2 00:41:58.257 iops : min= 4174, max= 5042, avg=4608.00, stdev=613.77, samples=2 00:41:58.257 lat (usec) : 750=0.01%, 1000=0.01% 00:41:58.257 lat (msec) : 2=0.05%, 10=6.30%, 20=89.54%, 50=4.07%, 100=0.01% 00:41:58.257 cpu : usr=3.88%, sys=6.87%, ctx=343, majf=0, minf=1 00:41:58.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:41:58.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:58.257 issued rwts: total=4600,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:58.257 job1: (groupid=0, jobs=1): err= 0: pid=3220372: Tue Nov 19 21:31:31 2024 00:41:58.257 read: IOPS=1501, BW=6006KiB/s (6150kB/s)(6144KiB/1023msec) 00:41:58.257 slat (usec): min=2, max=8792, avg=168.17, stdev=914.39 00:41:58.257 clat (usec): min=11983, max=46030, avg=19256.78, stdev=4650.00 00:41:58.257 lat (usec): min=11992, max=46039, avg=19424.95, stdev=4777.42 00:41:58.257 clat percentiles (usec): 00:41:58.257 | 1.00th=[13042], 5.00th=[15664], 10.00th=[16712], 20.00th=[16909], 00:41:58.257 | 30.00th=[17171], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:41:58.257 | 70.00th=[18220], 80.00th=[20317], 90.00th=[23987], 95.00th=[31327], 00:41:58.257 | 99.00th=[38536], 99.50th=[40633], 99.90th=[45876], 99.95th=[45876], 00:41:58.257 | 99.99th=[45876] 00:41:58.257 write: IOPS=1766, BW=7065KiB/s (7235kB/s)(7228KiB/1023msec); 0 zone resets 00:41:58.257 slat (usec): min=4, max=54770, avg=402.29, stdev=2290.44 00:41:58.257 clat (msec): min=14, max=128, avg=53.39, stdev=28.47 00:41:58.257 lat (msec): min=14, max=128, avg=53.79, stdev=28.65 00:41:58.257 clat 
percentiles (msec): 00:41:58.257 | 1.00th=[ 18], 5.00th=[ 19], 10.00th=[ 27], 20.00th=[ 31], 00:41:58.257 | 30.00th=[ 38], 40.00th=[ 40], 50.00th=[ 42], 60.00th=[ 47], 00:41:58.257 | 70.00th=[ 65], 80.00th=[ 81], 90.00th=[ 99], 95.00th=[ 115], 00:41:58.257 | 99.00th=[ 129], 99.50th=[ 129], 99.90th=[ 129], 99.95th=[ 129], 00:41:58.257 | 99.99th=[ 129] 00:41:58.257 bw ( KiB/s): min= 6528, max= 6904, per=15.51%, avg=6716.00, stdev=265.87, samples=2 00:41:58.257 iops : min= 1632, max= 1726, avg=1679.00, stdev=66.47, samples=2 00:41:58.257 lat (msec) : 20=38.59%, 50=41.40%, 100=15.97%, 250=4.04% 00:41:58.257 cpu : usr=1.86%, sys=3.13%, ctx=212, majf=0, minf=1 00:41:58.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:41:58.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:58.257 issued rwts: total=1536,1807,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:58.257 job2: (groupid=0, jobs=1): err= 0: pid=3220373: Tue Nov 19 21:31:31 2024 00:41:58.257 read: IOPS=1563, BW=6253KiB/s (6403kB/s)(6616KiB/1058msec) 00:41:58.257 slat (usec): min=3, max=23035, avg=224.77, stdev=1425.20 00:41:58.257 clat (usec): min=9237, max=93005, avg=27866.06, stdev=18586.79 00:41:58.257 lat (usec): min=9248, max=93012, avg=28090.82, stdev=18688.32 00:41:58.257 clat percentiles (usec): 00:41:58.257 | 1.00th=[10814], 5.00th=[15795], 10.00th=[15795], 20.00th=[15926], 00:41:58.257 | 30.00th=[19530], 40.00th=[20841], 50.00th=[21103], 60.00th=[22152], 00:41:58.257 | 70.00th=[22938], 80.00th=[27919], 90.00th=[60031], 95.00th=[77071], 00:41:58.257 | 99.00th=[90702], 99.50th=[91751], 99.90th=[92799], 99.95th=[92799], 00:41:58.257 | 99.99th=[92799] 00:41:58.257 write: IOPS=1935, BW=7743KiB/s (7929kB/s)(8192KiB/1058msec); 0 zone resets 00:41:58.257 slat (usec): min=4, max=23670, avg=300.46, stdev=1344.72 00:41:58.257 clat (usec): min=1236, max=93473, avg=43018.56, stdev=17970.23 00:41:58.257 lat (usec): min=1262, max=93495, avg=43319.03, stdev=18085.95 00:41:58.257 clat percentiles (usec): 00:41:58.257 | 1.00th=[10552], 5.00th=[20579], 10.00th=[21103], 20.00th=[24773], 00:41:58.257 | 30.00th=[28705], 40.00th=[36963], 50.00th=[39584], 60.00th=[47973], 00:41:58.258 | 70.00th=[57410], 80.00th=[64750], 90.00th=[67634], 95.00th=[68682], 00:41:58.258 | 99.00th=[72877], 99.50th=[90702], 99.90th=[93848], 99.95th=[93848], 00:41:58.258 | 99.99th=[93848] 00:41:58.258 bw ( KiB/s): min= 8112, max= 8192, per=18.82%, avg=8152.00, stdev=56.57, samples=2 00:41:58.258 iops : min= 2028, max= 2048, avg=2038.00, stdev=14.14, samples=2 00:41:58.258 lat (msec) : 2=0.03%, 10=0.73%, 20=15.05%, 50=58.64%, 100=25.55% 00:41:58.258 cpu : usr=1.80%, sys=2.84%, ctx=228, majf=0, minf=1 00:41:58.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:41:58.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:58.258 issued rwts: total=1654,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:58.258 job3: (groupid=0, jobs=1): err= 0: pid=3220374: Tue Nov 19 21:31:31 2024 00:41:58.258 read: IOPS=2502, BW=9.77MiB/s (10.2MB/s)(10.0MiB/1023msec) 00:41:58.258 slat (usec): min=2, max=20638, avg=166.95, stdev=1284.89 00:41:58.258 clat (usec): min=3194, max=69709, 
avg=19140.95, stdev=8543.11 00:41:58.258 lat (usec): min=3199, max=69713, avg=19307.90, stdev=8702.52 00:41:58.258 clat percentiles (usec): 00:41:58.258 | 1.00th=[ 8717], 5.00th=[10159], 10.00th=[13042], 20.00th=[13829], 00:41:58.258 | 30.00th=[14484], 40.00th=[14746], 50.00th=[16909], 60.00th=[19006], 00:41:58.258 | 70.00th=[19530], 80.00th=[20841], 90.00th=[33817], 95.00th=[38011], 00:41:58.258 | 99.00th=[53740], 99.50th=[56886], 99.90th=[69731], 99.95th=[69731], 00:41:58.258 | 99.99th=[69731] 00:41:58.258 write: IOPS=2923, BW=11.4MiB/s (12.0MB/s)(11.7MiB/1023msec); 0 zone resets 00:41:58.258 slat (usec): min=2, max=12470, avg=182.51, stdev=941.09 00:41:58.258 clat (usec): min=2458, max=90086, avg=27004.85, stdev=19345.83 00:41:58.258 lat (usec): min=2463, max=90093, avg=27187.36, stdev=19458.16 00:41:58.258 clat percentiles (usec): 00:41:58.258 | 1.00th=[ 4228], 5.00th=[ 8291], 10.00th=[10159], 20.00th=[13173], 00:41:58.258 | 30.00th=[15139], 40.00th=[15795], 50.00th=[17695], 60.00th=[19792], 00:41:58.258 | 70.00th=[36439], 80.00th=[40633], 90.00th=[58459], 95.00th=[68682], 00:41:58.258 | 99.00th=[83362], 99.50th=[87557], 99.90th=[89654], 99.95th=[89654], 00:41:58.258 | 99.99th=[89654] 00:41:58.258 bw ( KiB/s): min= 6520, max=16416, per=26.48%, avg=11468.00, stdev=6997.53, samples=2 00:41:58.258 iops : min= 1630, max= 4104, avg=2867.00, stdev=1749.38, samples=2 00:41:58.258 lat (msec) : 4=0.43%, 10=5.78%, 20=61.47%, 50=23.78%, 100=8.54% 00:41:58.258 cpu : usr=1.57%, sys=2.35%, ctx=248, majf=0, minf=1 00:41:58.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:41:58.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:58.258 issued rwts: total=2560,2991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:58.258 00:41:58.258 Run status group 0 (all jobs): 00:41:58.258 READ: bw=38.2MiB/s (40.1MB/s), 6006KiB/s-17.9MiB/s (6150kB/s-18.7MB/s), io=40.4MiB (42.4MB), run=1006-1058msec 00:41:58.258 WRITE: bw=42.3MiB/s (44.3MB/s), 7065KiB/s-17.9MiB/s (7235kB/s-18.8MB/s), io=44.7MiB (46.9MB), run=1006-1058msec 00:41:58.258 00:41:58.258 Disk stats (read/write): 00:41:58.258 nvme0n1: ios=3697/4096, merge=0/0, ticks=24391/32449, in_queue=56840, util=87.37% 00:41:58.258 nvme0n2: ios=1047/1495, merge=0/0, ticks=10641/42241, in_queue=52882, util=98.17% 00:41:58.258 nvme0n3: ios=1536/1663, merge=0/0, ticks=36646/65062, in_queue=101708, util=88.94% 00:41:58.258 nvme0n4: ios=2509/2560, merge=0/0, ticks=34886/38939, in_queue=73825, util=91.38% 00:41:58.258 21:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:41:58.258 21:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3220514 00:41:58.258 21:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:41:58.258 21:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:41:58.258 [global] 00:41:58.258 thread=1 00:41:58.258 invalidate=1 00:41:58.258 rw=read 00:41:58.258 time_based=1 00:41:58.258 runtime=10 00:41:58.258 ioengine=libaio 00:41:58.258 direct=1 00:41:58.258 bs=4096 00:41:58.258 iodepth=1 00:41:58.258 norandommap=1 00:41:58.258 numjobs=1 00:41:58.258 00:41:58.258 [job0] 
00:41:58.258 filename=/dev/nvme0n1 00:41:58.258 [job1] 00:41:58.258 filename=/dev/nvme0n2 00:41:58.258 [job2] 00:41:58.258 filename=/dev/nvme0n3 00:41:58.258 [job3] 00:41:58.258 filename=/dev/nvme0n4 00:41:58.258 Could not set queue depth (nvme0n1) 00:41:58.258 Could not set queue depth (nvme0n2) 00:41:58.258 Could not set queue depth (nvme0n3) 00:41:58.258 Could not set queue depth (nvme0n4) 00:41:58.516 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:58.516 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:58.516 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:58.516 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:58.516 fio-3.35 00:41:58.516 Starting 4 threads 00:42:01.821 21:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:42:01.821 21:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:42:01.821 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=753664, buflen=4096 00:42:01.821 fio: pid=3220662, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:01.821 21:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:01.821 21:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:42:01.821 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=638976, buflen=4096 00:42:01.821 fio: pid=3220651, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:02.079 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=5267456, buflen=4096 00:42:02.079 fio: pid=3220612, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:02.079 21:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:02.079 21:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:42:02.646 21:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:02.646 21:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:42:02.646 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=45551616, buflen=4096 00:42:02.646 fio: pid=3220623, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:02.646 00:42:02.646 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3220612: Tue Nov 19 21:31:36 2024 00:42:02.646 read: IOPS=368, BW=1471KiB/s (1507kB/s)(5144KiB/3496msec) 00:42:02.646 slat (usec): min=4, max=21631, 
avg=37.77, stdev=684.70 00:42:02.646 clat (usec): min=204, max=41129, avg=2659.36, stdev=9328.24 00:42:02.646 lat (usec): min=218, max=41171, avg=2697.15, stdev=9348.18 00:42:02.646 clat percentiles (usec): 00:42:02.646 | 1.00th=[ 247], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 285], 00:42:02.646 | 30.00th=[ 293], 40.00th=[ 314], 50.00th=[ 383], 60.00th=[ 400], 00:42:02.646 | 70.00th=[ 445], 80.00th=[ 494], 90.00th=[ 586], 95.00th=[41157], 00:42:02.646 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:02.646 | 99.99th=[41157] 00:42:02.646 bw ( KiB/s): min= 104, max= 2320, per=8.93%, avg=1172.00, stdev=917.16, samples=6 00:42:02.646 iops : min= 26, max= 580, avg=293.00, stdev=229.29, samples=6 00:42:02.646 lat (usec) : 250=1.24%, 500=80.81%, 750=10.26%, 1000=1.32% 00:42:02.646 lat (msec) : 2=0.62%, 10=0.08%, 50=5.59% 00:42:02.646 cpu : usr=0.20%, sys=0.77%, ctx=1292, majf=0, minf=1 00:42:02.646 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.646 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.646 issued rwts: total=1287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.646 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:02.646 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3220623: Tue Nov 19 21:31:36 2024 00:42:02.646 read: IOPS=2863, BW=11.2MiB/s (11.7MB/s)(43.4MiB/3884msec) 00:42:02.646 slat (usec): min=5, max=13473, avg=11.37, stdev=146.96 00:42:02.646 clat (usec): min=199, max=84654, avg=333.20, stdev=1273.35 00:42:02.646 lat (usec): min=205, max=84667, avg=344.57, stdev=1282.23 00:42:02.646 clat percentiles (usec): 00:42:02.646 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 249], 00:42:02.646 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 277], 00:42:02.646 | 70.00th=[ 293], 80.00th=[ 338], 90.00th=[ 420], 95.00th=[ 506], 00:42:02.646 | 99.00th=[ 611], 99.50th=[ 668], 99.90th=[ 2114], 99.95th=[41157], 00:42:02.646 | 99.99th=[41157] 00:42:02.646 bw ( KiB/s): min=10376, max=14920, per=94.49%, avg=12405.00, stdev=1791.86, samples=7 00:42:02.646 iops : min= 2594, max= 3730, avg=3101.14, stdev=448.07, samples=7 00:42:02.646 lat (usec) : 250=24.64%, 500=69.92%, 750=5.02%, 1000=0.23% 00:42:02.646 lat (msec) : 2=0.07%, 4=0.02%, 10=0.02%, 50=0.06%, 100=0.01% 00:42:02.646 cpu : usr=1.88%, sys=4.22%, ctx=11126, majf=0, minf=2 00:42:02.646 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.646 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.646 issued rwts: total=11122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.646 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:02.646 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3220651: Tue Nov 19 21:31:36 2024 00:42:02.646 read: IOPS=49, BW=195KiB/s (200kB/s)(624KiB/3196msec) 00:42:02.647 slat (usec): min=8, max=15919, avg=119.70, stdev=1269.08 00:42:02.647 clat (usec): min=351, max=42862, avg=20216.08, stdev=20330.89 00:42:02.647 lat (usec): min=371, max=56988, avg=20336.45, stdev=20473.52 00:42:02.647 clat percentiles (usec): 00:42:02.647 | 1.00th=[ 355], 5.00th=[ 359], 10.00th=[ 371], 20.00th=[ 392], 00:42:02.647 | 30.00th=[ 494], 40.00th=[ 537], 50.00th=[ 717], 
60.00th=[40633], 00:42:02.647 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:02.647 | 99.00th=[41681], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:02.647 | 99.99th=[42730] 00:42:02.647 bw ( KiB/s): min= 120, max= 256, per=1.54%, avg=202.67, stdev=61.90, samples=6 00:42:02.647 iops : min= 30, max= 64, avg=50.67, stdev=15.47, samples=6 00:42:02.647 lat (usec) : 500=31.85%, 750=18.47%, 1000=0.64% 00:42:02.647 lat (msec) : 50=48.41% 00:42:02.647 cpu : usr=0.19%, sys=0.00%, ctx=158, majf=0, minf=2 00:42:02.647 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.647 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.647 issued rwts: total=157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:02.647 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3220662: Tue Nov 19 21:31:36 2024 00:42:02.647 read: IOPS=62, BW=251KiB/s (257kB/s)(736KiB/2938msec) 00:42:02.647 slat (nsec): min=6988, max=45471, avg=19370.06, stdev=8948.45 00:42:02.647 clat (usec): min=325, max=41606, avg=15814.92, stdev=19616.02 00:42:02.647 lat (usec): min=342, max=41629, avg=15834.24, stdev=19614.54 00:42:02.647 clat percentiles (usec): 00:42:02.647 | 1.00th=[ 326], 5.00th=[ 367], 10.00th=[ 429], 20.00th=[ 494], 00:42:02.647 | 30.00th=[ 537], 40.00th=[ 562], 50.00th=[ 578], 60.00th=[ 676], 00:42:02.647 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:02.647 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:42:02.647 | 99.99th=[41681] 00:42:02.647 bw ( KiB/s): min= 208, max= 336, per=2.03%, avg=267.20, stdev=54.44, samples=5 00:42:02.647 iops : min= 52, max= 84, avg=66.80, stdev=13.61, samples=5 00:42:02.647 lat (usec) : 500=21.08%, 750=40.54% 00:42:02.647 lat (msec) : 50=37.84% 00:42:02.647 cpu : usr=0.03%, sys=0.20%, ctx=185, majf=0, minf=1 00:42:02.647 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.647 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.647 issued rwts: total=185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:02.647 00:42:02.647 Run status group 0 (all jobs): 00:42:02.647 READ: bw=12.8MiB/s (13.4MB/s), 195KiB/s-11.2MiB/s (200kB/s-11.7MB/s), io=49.8MiB (52.2MB), run=2938-3884msec 00:42:02.647 00:42:02.647 Disk stats (read/write): 00:42:02.647 nvme0n1: ios=1048/0, merge=0/0, ticks=3705/0, in_queue=3705, util=99.94% 00:42:02.647 nvme0n2: ios=11120/0, merge=0/0, ticks=3523/0, in_queue=3523, util=96.21% 00:42:02.647 nvme0n3: ios=154/0, merge=0/0, ticks=3074/0, in_queue=3074, util=96.29% 00:42:02.647 nvme0n4: ios=182/0, merge=0/0, ticks=2832/0, in_queue=2832, util=96.78% 00:42:02.904 21:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:02.904 21:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:42:03.163 21:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:03.163 21:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:42:03.422 21:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:03.422 21:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:42:03.680 21:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:03.680 21:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:42:04.245 21:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:42:04.245 21:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3220514 00:42:04.245 21:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:42:04.245 21:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:04.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:04.811 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:04.811 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:42:04.811 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:04.811 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:04.811 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:04.811 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:04.811 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:42:04.811 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:42:04.811 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:42:04.811 nvmf hotplug test: fio failed as expected 00:42:04.811 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:05.068 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:42:05.068 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:42:05.069 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:42:05.069 21:31:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:42:05.069 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:42:05.069 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:05.069 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:42:05.069 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:05.069 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:42:05.069 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:05.069 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:05.069 rmmod nvme_tcp 00:42:05.326 rmmod nvme_fabrics 00:42:05.326 rmmod nvme_keyring 00:42:05.326 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:05.326 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:42:05.326 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:42:05.326 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3218485 ']' 00:42:05.327 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3218485 00:42:05.327 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3218485 ']' 00:42:05.327 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3218485 00:42:05.327 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:42:05.327 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:05.327 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3218485 00:42:05.327 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:05.327 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:05.327 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3218485' 00:42:05.327 killing process with pid 3218485 00:42:05.327 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3218485 00:42:05.327 21:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3218485 00:42:06.703 21:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:06.703 21:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:06.703 21:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:06.703 21:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # 
iptr 00:42:06.703 21:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:06.703 21:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:42:06.703 21:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:42:06.703 21:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:06.703 21:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:06.703 21:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:06.703 21:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:06.703 21:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:08.613 00:42:08.613 real 0m26.887s 00:42:08.613 user 1m12.969s 00:42:08.613 sys 0m10.241s 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:08.613 ************************************ 00:42:08.613 END TEST nvmf_fio_target 00:42:08.613 ************************************ 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:08.613 ************************************ 00:42:08.613 START TEST nvmf_bdevio 00:42:08.613 ************************************ 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:08.613 * Looking for test storage... 
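Before the bdevio run that starts above gets under way, the nvmf_fio_target teardown just traced (nvme disconnect, subsystem deletion, state-file cleanup, module unload, process kill, firewall and namespace cleanup) condenses into a short shell sequence. The sketch below only restates what the trace shows: SPDK_ROOT stands in for the spdk checkout path, NVMF_PID is a hypothetical variable replacing the killprocess/waitforlisten PID bookkeeping, and the final ip netns delete is an assumption about what remove_spdk_ns does rather than something visible in the trace.

  # Hedged recap of the teardown steps traced above (illustrative, not the original script).
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1                         # drop the initiator connection
  "$SPDK_ROOT/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics                # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above are this step's output
  kill "$NVMF_PID" && wait "$NVMF_PID"                                  # NVMF_PID: hypothetical stand-in for the killprocess pid (3218485 in the trace)
  iptables-save | grep -v SPDK_NVMF | iptables-restore                  # strip only the SPDK_NVMF-tagged rules
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true                   # assumption: remove_spdk_ns clears this namespace
  ip -4 addr flush cvl_0_1                                              # flush the initiator-side address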
00:42:08.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:08.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:08.613 --rc genhtml_branch_coverage=1 00:42:08.613 --rc genhtml_function_coverage=1 00:42:08.613 --rc genhtml_legend=1 00:42:08.613 --rc geninfo_all_blocks=1 00:42:08.613 --rc geninfo_unexecuted_blocks=1 00:42:08.613 00:42:08.613 ' 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:08.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:08.613 --rc genhtml_branch_coverage=1 00:42:08.613 --rc genhtml_function_coverage=1 00:42:08.613 --rc genhtml_legend=1 00:42:08.613 --rc geninfo_all_blocks=1 00:42:08.613 --rc geninfo_unexecuted_blocks=1 00:42:08.613 00:42:08.613 ' 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:08.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:08.613 --rc genhtml_branch_coverage=1 00:42:08.613 --rc genhtml_function_coverage=1 00:42:08.613 --rc genhtml_legend=1 00:42:08.613 --rc geninfo_all_blocks=1 00:42:08.613 --rc geninfo_unexecuted_blocks=1 00:42:08.613 00:42:08.613 ' 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:08.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:08.613 --rc genhtml_branch_coverage=1 00:42:08.613 --rc genhtml_function_coverage=1 00:42:08.613 --rc genhtml_legend=1 00:42:08.613 --rc geninfo_all_blocks=1 00:42:08.613 --rc geninfo_unexecuted_blocks=1 00:42:08.613 00:42:08.613 ' 00:42:08.613 21:31:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:08.613 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:08.614 21:31:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:42:08.614 21:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:11.149 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:11.149 21:31:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:11.149 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:11.149 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:11.150 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:11.150 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:11.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:11.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:42:11.150 00:42:11.150 --- 10.0.0.2 ping statistics --- 00:42:11.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:11.150 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:11.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:11.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:42:11.150 00:42:11.150 --- 10.0.0.1 ping statistics --- 00:42:11.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:11.150 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:11.150 21:31:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3223485 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3223485 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3223485 ']' 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:11.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:11.150 21:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:11.150 [2024-11-19 21:31:44.611099] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:11.150 [2024-11-19 21:31:44.613439] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:42:11.150 [2024-11-19 21:31:44.613540] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:11.150 [2024-11-19 21:31:44.756682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:11.150 [2024-11-19 21:31:44.892465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:11.150 [2024-11-19 21:31:44.892543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:11.150 [2024-11-19 21:31:44.892572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:11.150 [2024-11-19 21:31:44.892594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:11.150 [2024-11-19 21:31:44.892631] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:11.150 [2024-11-19 21:31:44.895408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:11.150 [2024-11-19 21:31:44.895469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:42:11.150 [2024-11-19 21:31:44.895536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:11.150 [2024-11-19 21:31:44.895548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:42:11.717 [2024-11-19 21:31:45.261930] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
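Stripped of the xtrace plumbing, the target-side bring-up traced above is a small amount of ip/iptables work plus the interrupt-mode nvmf_tgt launch. The sketch below assumes the same cvl_0_0/cvl_0_1 port names and 10.0.0.0/24 addressing the trace uses, with SPDK_ROOT as a placeholder for the checkout path; it is a condensed illustration of the sequence, not the nvmf/common.sh implementation.

  # Move one port into a private namespace for the target; keep the peer port as the initiator side.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic in and tag the rule so teardown can strip it again.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                    # target address reachable from the default namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # and the initiator address from inside the namespace
  # Start the target inside the namespace: core mask 0x78 (cores 3-6), all trace groups, interrupt mode.
  ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &

The reactor and "Set spdk_thread ... to intr mode" notices in the trace are the direct result of that last command: one reactor per bit set in 0x78, each poll group switched to interrupt-driven operation.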
00:42:11.717 [2024-11-19 21:31:45.271421] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:11.717 [2024-11-19 21:31:45.271610] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:11.717 [2024-11-19 21:31:45.272458] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:11.717 [2024-11-19 21:31:45.272811] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:11.976 [2024-11-19 21:31:45.620639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:11.976 Malloc0 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:11.976 21:31:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:11.976 [2024-11-19 21:31:45.744937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:11.976 { 00:42:11.976 "params": { 00:42:11.976 "name": "Nvme$subsystem", 00:42:11.976 "trtype": "$TEST_TRANSPORT", 00:42:11.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:11.976 "adrfam": "ipv4", 00:42:11.976 "trsvcid": "$NVMF_PORT", 00:42:11.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:11.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:11.976 "hdgst": ${hdgst:-false}, 00:42:11.976 "ddgst": ${ddgst:-false} 00:42:11.976 }, 00:42:11.976 "method": "bdev_nvme_attach_controller" 00:42:11.976 } 00:42:11.976 EOF 00:42:11.976 )") 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:42:11.976 21:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:11.976 "params": { 00:42:11.976 "name": "Nvme1", 00:42:11.976 "trtype": "tcp", 00:42:11.976 "traddr": "10.0.0.2", 00:42:11.976 "adrfam": "ipv4", 00:42:11.976 "trsvcid": "4420", 00:42:11.976 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:11.976 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:11.976 "hdgst": false, 00:42:11.976 "ddgst": false 00:42:11.976 }, 00:42:11.976 "method": "bdev_nvme_attach_controller" 00:42:11.976 }' 00:42:12.235 [2024-11-19 21:31:45.831730] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
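The RPC calls traced above wire a 64 MiB malloc bdev into subsystem nqn.2016-06.io.spdk:cnode1 with a TCP listener on 10.0.0.2:4420, and bdevio is then launched against a generated JSON config that attaches to that listener. A condensed sketch follows, again with SPDK_ROOT as a placeholder; the JSON wrapper around the printed bdev_nvme_attach_controller entry is a reconstruction of the usual SPDK config layout, since the trace only prints the inner entry.

  rpc="$SPDK_ROOT/scripts/rpc.py"
  # Target side: transport, backing bdev, subsystem, namespace, TCP listener (as traced above).
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512 -b Malloc0
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: hand bdevio a config that attaches Nvme1 over TCP (wrapper reconstructed, see note above).
  cat > /tmp/bdevio_nvme.json <<'JSON'
  { "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                  "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false } }
  ] } ] }
  JSON
  "$SPDK_ROOT/test/bdev/bdevio/bdevio" --json /tmp/bdevio_nvme.json

With that config in place, the CUnit suite that follows in the trace exercises the attached Nvme1n1 bdev directly: plain write/read, zero-fill reads, controller reset, and the fused compare-and-write cases whose COMPARE FAILURE completions are expected output, not errors.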
00:42:12.235 [2024-11-19 21:31:45.831860] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3223637 ] 00:42:12.235 [2024-11-19 21:31:45.972447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:12.493 [2024-11-19 21:31:46.107782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:12.493 [2024-11-19 21:31:46.107831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:12.493 [2024-11-19 21:31:46.107837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:13.059 I/O targets: 00:42:13.059 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:42:13.059 00:42:13.059 00:42:13.060 CUnit - A unit testing framework for C - Version 2.1-3 00:42:13.060 http://cunit.sourceforge.net/ 00:42:13.060 00:42:13.060 00:42:13.060 Suite: bdevio tests on: Nvme1n1 00:42:13.060 Test: blockdev write read block ...passed 00:42:13.060 Test: blockdev write zeroes read block ...passed 00:42:13.060 Test: blockdev write zeroes read no split ...passed 00:42:13.060 Test: blockdev write zeroes read split ...passed 00:42:13.060 Test: blockdev write zeroes read split partial ...passed 00:42:13.060 Test: blockdev reset ...[2024-11-19 21:31:46.768454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:42:13.060 [2024-11-19 21:31:46.768631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:42:13.060 [2024-11-19 21:31:46.776247] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:42:13.060 passed 00:42:13.060 Test: blockdev write read 8 blocks ...passed 00:42:13.060 Test: blockdev write read size > 128k ...passed 00:42:13.060 Test: blockdev write read invalid size ...passed 00:42:13.318 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:13.318 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:13.318 Test: blockdev write read max offset ...passed 00:42:13.318 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:13.318 Test: blockdev writev readv 8 blocks ...passed 00:42:13.318 Test: blockdev writev readv 30 x 1block ...passed 00:42:13.318 Test: blockdev writev readv block ...passed 00:42:13.318 Test: blockdev writev readv size > 128k ...passed 00:42:13.318 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:13.318 Test: blockdev comparev and writev ...[2024-11-19 21:31:47.073817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:13.318 [2024-11-19 21:31:47.073868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:13.318 [2024-11-19 21:31:47.073906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:13.318 [2024-11-19 21:31:47.073933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:13.318 [2024-11-19 21:31:47.074505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:13.318 [2024-11-19 21:31:47.074548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:42:13.318 [2024-11-19 21:31:47.074585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:13.318 [2024-11-19 21:31:47.074611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:42:13.318 [2024-11-19 21:31:47.075141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:13.318 [2024-11-19 21:31:47.075175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:42:13.318 [2024-11-19 21:31:47.075213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:13.318 [2024-11-19 21:31:47.075239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:42:13.318 [2024-11-19 21:31:47.075759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:13.318 [2024-11-19 21:31:47.075792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:42:13.318 [2024-11-19 21:31:47.075825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:13.318 [2024-11-19 21:31:47.075854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:42:13.576 passed 00:42:13.576 Test: blockdev nvme passthru rw ...passed 00:42:13.576 Test: blockdev nvme passthru vendor specific ...[2024-11-19 21:31:47.158426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:13.576 [2024-11-19 21:31:47.158466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:42:13.576 [2024-11-19 21:31:47.158688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:13.576 [2024-11-19 21:31:47.158722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:42:13.576 [2024-11-19 21:31:47.158930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:13.576 [2024-11-19 21:31:47.158961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:42:13.576 [2024-11-19 21:31:47.159202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:13.576 [2024-11-19 21:31:47.159234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:42:13.576 passed 00:42:13.576 Test: blockdev nvme admin passthru ...passed 00:42:13.576 Test: blockdev copy ...passed 00:42:13.576 00:42:13.576 Run Summary: Type Total Ran Passed Failed Inactive 00:42:13.576 suites 1 1 n/a 0 0 00:42:13.576 tests 23 23 23 0 0 00:42:13.576 asserts 152 152 152 0 n/a 00:42:13.576 00:42:13.576 Elapsed time = 1.319 seconds 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:14.535 rmmod nvme_tcp 00:42:14.535 rmmod nvme_fabrics 00:42:14.535 rmmod nvme_keyring 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
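The teardown that runs from here (nvmf_delete_subsystem plus the nvmftestfini trap) reduces to roughly the following. This is a sketch only, assuming the same rpc.py socket and that the nvmf_tgt pid is held in $nvmfpid as in the harness:

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem first
    modprobe -v -r nvme-tcp                                           # unload host-side NVMe/TCP support (the rmmod lines above)
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                                # stop the target, as killprocess/wait do just below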
00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3223485 ']' 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3223485 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3223485 ']' 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3223485 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3223485 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3223485' 00:42:14.535 killing process with pid 3223485 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3223485 00:42:14.535 21:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3223485 00:42:15.912 21:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:15.912 21:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:15.912 21:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:15.912 21:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:42:15.912 21:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:42:15.912 21:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:15.912 21:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:42:15.912 21:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:15.912 21:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:15.912 21:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:15.912 21:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:15.912 21:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:17.816 21:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:17.816 00:42:17.816 real 0m9.262s 00:42:17.816 user 
0m16.578s 00:42:17.816 sys 0m3.126s 00:42:17.816 21:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:17.816 21:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:17.816 ************************************ 00:42:17.816 END TEST nvmf_bdevio 00:42:17.816 ************************************ 00:42:17.816 21:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:42:17.816 00:42:17.816 real 4m29.603s 00:42:17.816 user 9m53.283s 00:42:17.816 sys 1m28.270s 00:42:17.816 21:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:17.816 21:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:17.816 ************************************ 00:42:17.816 END TEST nvmf_target_core_interrupt_mode 00:42:17.816 ************************************ 00:42:17.816 21:31:51 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:17.816 21:31:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:17.816 21:31:51 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:17.816 21:31:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:17.816 ************************************ 00:42:17.816 START TEST nvmf_interrupt 00:42:17.816 ************************************ 00:42:17.816 21:31:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:18.075 * Looking for test storage... 
00:42:18.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:18.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:18.075 --rc genhtml_branch_coverage=1 00:42:18.075 --rc genhtml_function_coverage=1 00:42:18.075 --rc genhtml_legend=1 00:42:18.075 --rc geninfo_all_blocks=1 00:42:18.075 --rc geninfo_unexecuted_blocks=1 00:42:18.075 00:42:18.075 ' 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:18.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:18.075 --rc genhtml_branch_coverage=1 00:42:18.075 --rc genhtml_function_coverage=1 00:42:18.075 --rc genhtml_legend=1 00:42:18.075 --rc geninfo_all_blocks=1 00:42:18.075 --rc geninfo_unexecuted_blocks=1 00:42:18.075 00:42:18.075 ' 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:18.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:18.075 --rc genhtml_branch_coverage=1 00:42:18.075 --rc genhtml_function_coverage=1 00:42:18.075 --rc genhtml_legend=1 00:42:18.075 --rc geninfo_all_blocks=1 00:42:18.075 --rc geninfo_unexecuted_blocks=1 00:42:18.075 00:42:18.075 ' 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:18.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:18.075 --rc genhtml_branch_coverage=1 00:42:18.075 --rc genhtml_function_coverage=1 00:42:18.075 --rc genhtml_legend=1 00:42:18.075 --rc geninfo_all_blocks=1 00:42:18.075 --rc geninfo_unexecuted_blocks=1 00:42:18.075 00:42:18.075 ' 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:18.075 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:42:18.076 21:31:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:19.980 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:19.980 21:31:53 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:19.980 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:19.980 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:19.980 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:19.980 21:31:53 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:19.980 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:20.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:20.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:42:20.240 00:42:20.240 --- 10.0.0.2 ping statistics --- 00:42:20.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:20.240 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:20.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:20.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:42:20.240 00:42:20.240 --- 10.0.0.1 ping statistics --- 00:42:20.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:20.240 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3225986 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3225986 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3225986 ']' 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:20.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:20.240 21:31:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:20.240 [2024-11-19 21:31:53.974506] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:20.240 [2024-11-19 21:31:53.977113] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:42:20.240 [2024-11-19 21:31:53.977220] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:20.499 [2024-11-19 21:31:54.122560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:20.499 [2024-11-19 21:31:54.257061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:42:20.499 [2024-11-19 21:31:54.257154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:20.499 [2024-11-19 21:31:54.257183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:20.499 [2024-11-19 21:31:54.257204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:20.499 [2024-11-19 21:31:54.257235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:20.499 [2024-11-19 21:31:54.259734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:20.499 [2024-11-19 21:31:54.259740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:21.066 [2024-11-19 21:31:54.624469] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:21.066 [2024-11-19 21:31:54.625228] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:21.066 [2024-11-19 21:31:54.625597] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:21.324 21:31:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:21.324 21:31:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:42:21.324 21:31:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:21.324 21:31:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:21.324 21:31:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:21.324 21:31:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:21.324 21:31:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:42:21.324 21:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:42:21.324 21:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:42:21.324 21:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:42:21.324 5000+0 records in 00:42:21.324 5000+0 records out 00:42:21.324 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0114086 s, 898 MB/s 00:42:21.324 21:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:42:21.324 21:31:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.324 21:31:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:21.324 AIO0 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:21.324 [2024-11-19 21:31:55.012841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.324 21:31:55 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:21.324 [2024-11-19 21:31:55.041003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3225986 0 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3225986 0 idle 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3225986 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3225986 -w 256 00:42:21.324 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:21.583 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3225986 root 20 0 20.1t 196608 100608 S 0.0 0.3 0:00.75 reactor_0' 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3225986 root 20 0 20.1t 196608 100608 S 0.0 0.3 0:00.75 reactor_0 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- 
# cpu_rate=0 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3225986 1 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3225986 1 idle 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3225986 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3225986 -w 256 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3225998 root 20 0 20.1t 196608 100608 S 0.0 0.3 0:00.00 reactor_1' 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3225998 root 20 0 20.1t 196608 100608 S 0.0 0.3 0:00.00 reactor_1 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:21.584 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3226166 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in 
{0..1} 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3225986 0 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3225986 0 busy 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3225986 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3225986 -w 256 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3225986 root 20 0 20.1t 197760 101376 S 0.0 0.3 0:00.76 reactor_0' 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3225986 root 20 0 20.1t 197760 101376 S 0.0 0.3 0:00.76 reactor_0 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:21.843 21:31:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:42:22.779 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:42:22.779 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:22.779 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3225986 -w 256 00:42:22.779 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3225986 root 20 0 20.1t 210048 101760 R 99.9 0.3 0:02.95 reactor_0' 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3225986 root 20 0 20.1t 210048 101760 R 99.9 0.3 0:02.95 reactor_0 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 
-- # (( cpu_rate < busy_threshold )) 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3225986 1 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3225986 1 busy 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3225986 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3225986 -w 256 00:42:23.037 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:23.296 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3225998 root 20 0 20.1t 210048 101760 R 93.3 0.3 0:01.22 reactor_1' 00:42:23.296 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3225998 root 20 0 20.1t 210048 101760 R 93.3 0.3 0:01.22 reactor_1 00:42:23.296 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:23.296 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:23.296 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:42:23.296 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:42:23.296 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:23.296 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:23.296 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:23.296 21:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:23.296 21:31:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3226166 00:42:33.269 Initializing NVMe Controllers 00:42:33.269 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:33.269 Controller IO queue size 256, less than required. 00:42:33.269 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:33.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:42:33.270 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:42:33.270 Initialization complete. Launching workers. 
00:42:33.270 ======================================================== 00:42:33.270 Latency(us) 00:42:33.270 Device Information : IOPS MiB/s Average min max 00:42:33.270 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 10378.87 40.54 24689.25 6648.48 65988.08 00:42:33.270 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 10616.27 41.47 24135.86 6638.08 29635.48 00:42:33.270 ======================================================== 00:42:33.270 Total : 20995.14 82.01 24409.42 6638.08 65988.08 00:42:33.270 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3225986 0 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3225986 0 idle 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3225986 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3225986 -w 256 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3225986 root 20 0 20.1t 210048 101760 S 0.0 0.3 0:20.23 reactor_0' 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3225986 root 20 0 20.1t 210048 101760 S 0.0 0.3 0:20.23 reactor_0 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3225986 1 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3225986 1 idle 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3225986 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3225986 -w 256 00:42:33.270 21:32:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:33.270 21:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3225998 root 20 0 20.1t 210048 101760 S 0.0 0.3 0:09.50 reactor_1' 00:42:33.270 21:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3225998 root 20 0 20.1t 210048 101760 S 0.0 0.3 0:09.50 reactor_1 00:42:33.270 21:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:33.270 21:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:33.270 21:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:33.270 21:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:33.270 21:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:33.270 21:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:33.270 21:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:33.270 21:32:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:33.270 21:32:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:33.270 21:32:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:42:33.270 21:32:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:42:33.270 21:32:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:42:33.270 21:32:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:42:33.270 21:32:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt 
-- target/interrupt.sh@52 -- # for i in {0..1} 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3225986 0 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3225986 0 idle 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3225986 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3225986 -w 256 00:42:34.648 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3225986 root 20 0 20.1t 237696 111360 S 0.0 0.4 0:20.39 reactor_0' 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3225986 root 20 0 20.1t 237696 111360 S 0.0 0.4 0:20.39 reactor_0 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3225986 1 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3225986 1 idle 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3225986 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
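Just above, the host side attached to the target with the kernel initiator (nvme connect over TCP to 10.0.0.2:4420) and then waited for the namespace to surface as a block device whose SERIAL matches the target's configured NVMF_SERIAL string. Condensed from the traced commands, with an illustrative function name standing in for the autotest waitforserial helper:

  # attach to the SPDK subsystem exported earlier in the test
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

  wait_for_serial() {
      local serial=$1 expected=${2:-1} i=0 found
      while (( i++ <= 15 )); do
          # grep -c prints the match count even when it exits non-zero
          found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
          (( found == expected )) && return 0
          sleep 2
      done
      return 1
  }
  wait_for_serial SPDKISFASTANDAWESOME 1

Once the device is visible, the test re-checks that both reactors stay idle: with interrupt mode enabled, an otherwise quiet connection should not push either reactor back into a 100% poll loop.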
00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3225986 -w 256 00:42:34.914 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:35.239 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3225998 root 20 0 20.1t 237696 111360 S 0.0 0.4 0:09.57 reactor_1' 00:42:35.239 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3225998 root 20 0 20.1t 237696 111360 S 0.0 0.4 0:09.57 reactor_1 00:42:35.239 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:35.239 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:35.239 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:35.239 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:35.239 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:35.239 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:35.239 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:35.239 21:32:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:35.239 21:32:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:35.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:35.498 rmmod nvme_tcp 00:42:35.498 rmmod nvme_fabrics 00:42:35.498 rmmod nvme_keyring 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3225986 ']' 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3225986 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3225986 ']' 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3225986 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3225986 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3225986' 00:42:35.498 killing process with pid 3225986 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3225986 00:42:35.498 21:32:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3225986 00:42:36.872 21:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:36.872 21:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:36.872 21:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:36.872 21:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:42:36.872 21:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:42:36.872 21:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:36.872 21:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:42:36.872 21:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:36.872 21:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:36.873 21:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:36.873 21:32:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:36.873 21:32:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:38.775 21:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:38.775 00:42:38.775 real 0m20.803s 00:42:38.775 user 0m38.845s 00:42:38.775 sys 0m6.715s 00:42:38.775 21:32:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:38.775 21:32:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:38.775 ************************************ 00:42:38.775 END TEST nvmf_interrupt 00:42:38.775 ************************************ 00:42:38.775 00:42:38.775 real 35m36.983s 00:42:38.775 user 93m27.885s 00:42:38.775 sys 7m51.897s 00:42:38.775 21:32:12 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:38.775 21:32:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:38.775 ************************************ 00:42:38.775 END TEST nvmf_tcp 00:42:38.775 ************************************ 00:42:38.775 21:32:12 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:42:38.775 21:32:12 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:38.775 21:32:12 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:38.775 21:32:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:38.775 21:32:12 -- common/autotest_common.sh@10 -- # set +x 00:42:38.775 ************************************ 00:42:38.775 START TEST spdkcli_nvmf_tcp 00:42:38.775 ************************************ 00:42:38.775 21:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:38.775 * Looking for test storage... 00:42:38.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:42:38.775 21:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:38.775 21:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:42:38.775 21:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:39.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:39.034 --rc genhtml_branch_coverage=1 00:42:39.034 --rc genhtml_function_coverage=1 00:42:39.034 --rc genhtml_legend=1 00:42:39.034 --rc geninfo_all_blocks=1 00:42:39.034 --rc geninfo_unexecuted_blocks=1 00:42:39.034 00:42:39.034 ' 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:39.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:39.034 --rc genhtml_branch_coverage=1 00:42:39.034 --rc genhtml_function_coverage=1 00:42:39.034 --rc genhtml_legend=1 00:42:39.034 --rc geninfo_all_blocks=1 00:42:39.034 --rc geninfo_unexecuted_blocks=1 00:42:39.034 00:42:39.034 ' 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:39.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:39.034 --rc genhtml_branch_coverage=1 00:42:39.034 --rc genhtml_function_coverage=1 00:42:39.034 --rc genhtml_legend=1 00:42:39.034 --rc geninfo_all_blocks=1 00:42:39.034 --rc geninfo_unexecuted_blocks=1 00:42:39.034 00:42:39.034 ' 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:39.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:39.034 --rc genhtml_branch_coverage=1 00:42:39.034 --rc genhtml_function_coverage=1 00:42:39.034 --rc genhtml_legend=1 00:42:39.034 --rc geninfo_all_blocks=1 00:42:39.034 --rc geninfo_unexecuted_blocks=1 00:42:39.034 00:42:39.034 ' 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:42:39.034 
21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:39.034 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:42:39.035 21:32:12 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:39.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3228304 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3228304 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3228304 ']' 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:39.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:39.035 21:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:39.035 [2024-11-19 21:32:12.708757] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
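The banner above and the EAL parameter dump that follows come from a fresh nvmf_tgt that the spdkcli test just launched with core mask 0x3; the harness then blocks until the application's RPC socket at /var/tmp/spdk.sock answers before driving it with spdkcli_job.py. A minimal sketch of that bring-up, assuming the stock scripts/rpc.py client; the real waitforlisten helper adds timeouts and more thorough liveness checks:

  # start the NVMe-oF target on two cores and wait for its RPC server
  ./build/bin/nvmf_tgt -m 0x3 -p 0 &
  tgt_pid=$!

  until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      # bail out early if the target crashed instead of listening
      kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      sleep 0.5
  done
  # the socket is live; spdkcli_job.py can now create transports, bdevs and subsystems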
00:42:39.035 [2024-11-19 21:32:12.708904] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3228304 ] 00:42:39.294 [2024-11-19 21:32:12.850428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:39.294 [2024-11-19 21:32:12.990122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:39.294 [2024-11-19 21:32:12.990124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:39.860 21:32:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:39.860 21:32:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:42:39.860 21:32:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:42:39.860 21:32:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:39.860 21:32:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:40.119 21:32:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:42:40.119 21:32:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:42:40.119 21:32:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:42:40.119 21:32:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:40.119 21:32:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:40.119 21:32:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:42:40.119 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:42:40.119 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:42:40.119 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:42:40.119 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:42:40.119 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:42:40.119 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:42:40.119 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:40.119 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:42:40.119 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:42:40.119 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:40.119 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:40.119 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:42:40.119 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:40.119 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:40.119 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:42:40.119 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:42:40.119 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:40.119 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:40.119 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:40.119 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:42:40.119 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:42:40.119 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:40.119 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:42:40.119 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:40.119 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:42:40.119 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:42:40.119 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:42:40.119 ' 00:42:43.402 [2024-11-19 21:32:16.493891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:44.335 [2024-11-19 21:32:17.779795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:42:46.866 [2024-11-19 21:32:20.159459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:42:48.764 [2024-11-19 21:32:22.210044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:42:50.138 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:42:50.138 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:42:50.138 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:42:50.138 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:42:50.138 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:42:50.138 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:42:50.138 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:42:50.138 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:50.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:42:50.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:42:50.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:50.138 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:50.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:42:50.138 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:50.138 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:50.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:42:50.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:50.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:50.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:50.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:50.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:42:50.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:42:50.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:50.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:42:50.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:50.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:42:50.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:42:50.138 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:42:50.138 21:32:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:42:50.138 21:32:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:50.138 21:32:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:50.138 21:32:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:42:50.138 21:32:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:50.138 21:32:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:50.138 21:32:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:42:50.138 21:32:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:42:50.704 21:32:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:42:50.704 21:32:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:42:50.704 21:32:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:42:50.704 21:32:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:50.704 21:32:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:50.704 
21:32:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:42:50.704 21:32:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:50.704 21:32:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:50.704 21:32:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:42:50.704 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:42:50.704 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:50.704 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:42:50.704 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:42:50.704 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:42:50.704 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:42:50.704 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:50.704 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:42:50.704 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:42:50.704 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:42:50.704 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:42:50.704 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:42:50.704 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:42:50.704 ' 00:42:57.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:42:57.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:42:57.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:57.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:42:57.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:42:57.265 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:42:57.265 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:42:57.265 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:57.265 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:42:57.265 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:42:57.265 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:42:57.265 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:42:57.265 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:42:57.265 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:42:57.265 21:32:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:42:57.265 21:32:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:57.265 21:32:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:57.265 
21:32:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3228304 00:42:57.265 21:32:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3228304 ']' 00:42:57.265 21:32:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3228304 00:42:57.265 21:32:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:42:57.265 21:32:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:57.265 21:32:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3228304 00:42:57.265 21:32:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:57.265 21:32:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:57.265 21:32:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3228304' 00:42:57.265 killing process with pid 3228304 00:42:57.265 21:32:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3228304 00:42:57.265 21:32:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3228304 00:42:57.832 21:32:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:42:57.832 21:32:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:42:57.832 21:32:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3228304 ']' 00:42:57.832 21:32:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3228304 00:42:57.832 21:32:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3228304 ']' 00:42:57.832 21:32:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3228304 00:42:57.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3228304) - No such process 00:42:57.832 21:32:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3228304 is not found' 00:42:57.832 Process with pid 3228304 is not found 00:42:57.832 21:32:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:42:57.832 21:32:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:42:57.832 21:32:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:42:57.832 00:42:57.832 real 0m19.024s 00:42:57.832 user 0m39.921s 00:42:57.832 sys 0m1.005s 00:42:57.832 21:32:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:57.832 21:32:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:57.832 ************************************ 00:42:57.832 END TEST spdkcli_nvmf_tcp 00:42:57.832 ************************************ 00:42:57.832 21:32:31 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:57.832 21:32:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:57.832 21:32:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:57.832 21:32:31 -- common/autotest_common.sh@10 -- # set +x 00:42:57.832 ************************************ 00:42:57.832 START TEST nvmf_identify_passthru 00:42:57.832 ************************************ 00:42:57.832 21:32:31 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:57.832 * Looking for test 
storage... 00:42:57.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:57.832 21:32:31 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:57.832 21:32:31 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:42:57.832 21:32:31 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:58.092 21:32:31 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:42:58.092 21:32:31 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:58.093 21:32:31 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:58.093 21:32:31 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:42:58.093 21:32:31 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:58.093 21:32:31 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:58.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:58.093 --rc genhtml_branch_coverage=1 00:42:58.093 --rc genhtml_function_coverage=1 00:42:58.093 --rc genhtml_legend=1 00:42:58.093 --rc geninfo_all_blocks=1 00:42:58.093 --rc geninfo_unexecuted_blocks=1 00:42:58.093 00:42:58.093 ' 00:42:58.093 21:32:31 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:58.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:58.093 --rc genhtml_branch_coverage=1 00:42:58.093 --rc genhtml_function_coverage=1 00:42:58.093 --rc genhtml_legend=1 00:42:58.093 --rc geninfo_all_blocks=1 00:42:58.093 --rc geninfo_unexecuted_blocks=1 00:42:58.093 00:42:58.093 ' 00:42:58.093 21:32:31 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:58.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:58.093 --rc genhtml_branch_coverage=1 00:42:58.093 --rc genhtml_function_coverage=1 00:42:58.093 --rc genhtml_legend=1 00:42:58.093 --rc geninfo_all_blocks=1 00:42:58.093 --rc geninfo_unexecuted_blocks=1 00:42:58.093 00:42:58.093 ' 00:42:58.093 21:32:31 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:58.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:58.093 --rc genhtml_branch_coverage=1 00:42:58.093 --rc genhtml_function_coverage=1 00:42:58.093 --rc genhtml_legend=1 00:42:58.093 --rc geninfo_all_blocks=1 00:42:58.093 --rc geninfo_unexecuted_blocks=1 00:42:58.093 00:42:58.093 ' 00:42:58.093 21:32:31 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:58.093 21:32:31 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:58.093 21:32:31 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:58.093 21:32:31 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:58.093 21:32:31 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:58.093 21:32:31 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:58.093 21:32:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:58.093 21:32:31 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:58.093 21:32:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:58.093 21:32:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:58.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:58.093 21:32:31 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:58.093 21:32:31 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:58.093 21:32:31 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:58.093 21:32:31 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:58.093 21:32:31 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:58.093 21:32:31 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:58.093 21:32:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:58.093 21:32:31 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:58.093 21:32:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:58.093 21:32:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:58.093 21:32:31 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:58.093 21:32:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:58.093 21:32:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:58.093 21:32:31 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:42:58.093 21:32:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:43:00.067 21:32:33 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:43:00.067 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:43:00.067 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:43:00.067 Found net devices under 0000:0a:00.0: cvl_0_0 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:43:00.067 Found net devices under 0000:0a:00.1: cvl_0_1 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:00.067 21:32:33 nvmf_identify_passthru -- 
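The NIC discovery traced above reduces to globbing each allow-listed PCI function's net/ directory under sysfs; a condensed sketch of that lookup, using the two E810 ports found in this run (loop written here for illustration, not part of the harness):

for pci in 0000:0a:00.0 0000:0a:00.1; do
    # each PCI function exposes its kernel netdev name(s) under sysfs
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
done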
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:00.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:00.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:43:00.067 00:43:00.067 --- 10.0.0.2 ping statistics --- 00:43:00.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:00.067 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:00.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
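Condensed, the TCP test topology being brought up here is one NIC port moved into a private network namespace for the target while its peer port stays in the default namespace for the initiator; roughly, with the interface names and 10.0.0.0/24 addresses used in this run:

ip netns add cvl_0_0_ns_spdk                                        # target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (default namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP port 4420 through
ping -c 1 10.0.0.2                                                  # reachability check in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1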
00:43:00.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:43:00.067 00:43:00.067 --- 10.0.0.1 ping statistics --- 00:43:00.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:00.067 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:00.067 21:32:33 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:00.067 21:32:33 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:43:00.067 21:32:33 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:00.068 21:32:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:00.068 21:32:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:43:00.068 21:32:33 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:43:00.068 21:32:33 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:43:00.068 21:32:33 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:43:00.068 21:32:33 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:43:00.068 21:32:33 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:43:00.068 21:32:33 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:43:00.068 21:32:33 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:00.068 21:32:33 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:43:00.068 21:32:33 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:43:00.068 21:32:33 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:43:00.068 21:32:33 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:43:00.068 21:32:33 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:43:00.068 21:32:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:43:00.068 21:32:33 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:43:00.068 21:32:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:43:00.068 21:32:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:43:00.068 21:32:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:43:05.355 21:32:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ916004901P0FGN 00:43:05.356 21:32:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:43:05.356 21:32:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:43:05.356 21:32:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:43:09.537 21:32:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:43:09.537 21:32:42 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:43:09.537 21:32:42 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:09.537 21:32:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:09.537 21:32:42 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:43:09.537 21:32:42 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:09.537 21:32:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:09.537 21:32:42 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3233190 00:43:09.537 21:32:42 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:43:09.537 21:32:42 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:09.537 21:32:42 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3233190 00:43:09.537 21:32:42 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3233190 ']' 00:43:09.537 21:32:42 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:09.537 21:32:42 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:09.537 21:32:42 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:09.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:09.537 21:32:42 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:09.537 21:32:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:09.537 [2024-11-19 21:32:42.758110] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:43:09.537 [2024-11-19 21:32:42.758267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:09.537 [2024-11-19 21:32:42.906346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:09.537 [2024-11-19 21:32:43.043724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:09.537 [2024-11-19 21:32:43.043814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
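The baseline values used for the comparison later in the test are scraped from spdk_nvme_identify run directly against the PCIe device; a condensed sketch, with 0000:88:00.0 being the drive picked in this run and the binary path shortened relative to the SPDK tree:

bdf=0000:88:00.0
identify=./build/bin/spdk_nvme_identify
nvme_serial_number=$($identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
nvme_model_number=$($identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
# note: awk '{print $3}' keeps only the first word of the model string,
# which is why the value recorded above is just INTEL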
00:43:09.537 [2024-11-19 21:32:43.043839] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:09.537 [2024-11-19 21:32:43.043863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:09.537 [2024-11-19 21:32:43.043881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:09.537 [2024-11-19 21:32:43.046736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:09.537 [2024-11-19 21:32:43.046814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:09.537 [2024-11-19 21:32:43.046903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:09.537 [2024-11-19 21:32:43.046909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:10.103 21:32:43 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:10.103 21:32:43 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:43:10.103 21:32:43 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:43:10.103 21:32:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.103 21:32:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:10.103 INFO: Log level set to 20 00:43:10.103 INFO: Requests: 00:43:10.103 { 00:43:10.103 "jsonrpc": "2.0", 00:43:10.103 "method": "nvmf_set_config", 00:43:10.103 "id": 1, 00:43:10.103 "params": { 00:43:10.103 "admin_cmd_passthru": { 00:43:10.103 "identify_ctrlr": true 00:43:10.103 } 00:43:10.103 } 00:43:10.103 } 00:43:10.103 00:43:10.103 INFO: response: 00:43:10.103 { 00:43:10.103 "jsonrpc": "2.0", 00:43:10.103 "id": 1, 00:43:10.103 "result": true 00:43:10.103 } 00:43:10.103 00:43:10.103 21:32:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.103 21:32:43 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:43:10.103 21:32:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.103 21:32:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:10.103 INFO: Setting log level to 20 00:43:10.103 INFO: Setting log level to 20 00:43:10.103 INFO: Log level set to 20 00:43:10.103 INFO: Log level set to 20 00:43:10.103 INFO: Requests: 00:43:10.103 { 00:43:10.103 "jsonrpc": "2.0", 00:43:10.103 "method": "framework_start_init", 00:43:10.103 "id": 1 00:43:10.103 } 00:43:10.103 00:43:10.103 INFO: Requests: 00:43:10.103 { 00:43:10.103 "jsonrpc": "2.0", 00:43:10.103 "method": "framework_start_init", 00:43:10.103 "id": 1 00:43:10.103 } 00:43:10.103 00:43:10.361 [2024-11-19 21:32:44.117032] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:43:10.361 INFO: response: 00:43:10.361 { 00:43:10.361 "jsonrpc": "2.0", 00:43:10.361 "id": 1, 00:43:10.361 "result": true 00:43:10.361 } 00:43:10.361 00:43:10.361 INFO: response: 00:43:10.361 { 00:43:10.361 "jsonrpc": "2.0", 00:43:10.361 "id": 1, 00:43:10.361 "result": true 00:43:10.361 } 00:43:10.361 00:43:10.361 21:32:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.361 21:32:44 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:10.361 21:32:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.361 21:32:44 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:43:10.361 INFO: Setting log level to 40 00:43:10.361 INFO: Setting log level to 40 00:43:10.361 INFO: Setting log level to 40 00:43:10.361 [2024-11-19 21:32:44.129979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:10.361 21:32:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.361 21:32:44 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:43:10.361 21:32:44 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:10.361 21:32:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:10.619 21:32:44 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:43:10.619 21:32:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.619 21:32:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:13.900 Nvme0n1 00:43:13.900 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:13.900 21:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:43:13.900 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:13.900 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:13.900 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:13.901 21:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:43:13.901 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:13.901 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:13.901 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:13.901 21:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:13.901 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:13.901 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:13.901 [2024-11-19 21:32:47.089883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:13.901 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:13.901 21:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:43:13.901 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:13.901 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:13.901 [ 00:43:13.901 { 00:43:13.901 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:43:13.901 "subtype": "Discovery", 00:43:13.901 "listen_addresses": [], 00:43:13.901 "allow_any_host": true, 00:43:13.901 "hosts": [] 00:43:13.901 }, 00:43:13.901 { 00:43:13.901 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:43:13.901 "subtype": "NVMe", 00:43:13.901 "listen_addresses": [ 00:43:13.901 { 00:43:13.901 "trtype": "TCP", 00:43:13.901 "adrfam": "IPv4", 00:43:13.901 "traddr": "10.0.0.2", 00:43:13.901 "trsvcid": "4420" 00:43:13.901 } 00:43:13.901 ], 00:43:13.901 "allow_any_host": true, 00:43:13.901 "hosts": [], 00:43:13.901 "serial_number": 
"SPDK00000000000001", 00:43:13.901 "model_number": "SPDK bdev Controller", 00:43:13.901 "max_namespaces": 1, 00:43:13.901 "min_cntlid": 1, 00:43:13.901 "max_cntlid": 65519, 00:43:13.901 "namespaces": [ 00:43:13.901 { 00:43:13.901 "nsid": 1, 00:43:13.901 "bdev_name": "Nvme0n1", 00:43:13.901 "name": "Nvme0n1", 00:43:13.901 "nguid": "B0E1AE27A0CA4E22AF2800B6EF46ED72", 00:43:13.901 "uuid": "b0e1ae27-a0ca-4e22-af28-00b6ef46ed72" 00:43:13.901 } 00:43:13.901 ] 00:43:13.901 } 00:43:13.901 ] 00:43:13.901 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:13.901 21:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:13.901 21:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:43:13.901 21:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:43:13.901 21:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:43:13.901 21:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:13.901 21:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:43:13.901 21:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:43:14.159 21:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:43:14.159 21:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:43:14.159 21:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:43:14.159 21:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:14.159 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.159 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:14.159 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.159 21:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:43:14.159 21:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:43:14.159 21:32:47 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:14.159 21:32:47 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:43:14.159 21:32:47 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:14.159 21:32:47 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:43:14.159 21:32:47 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:14.159 21:32:47 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:14.159 rmmod nvme_tcp 00:43:14.159 rmmod nvme_fabrics 00:43:14.159 rmmod nvme_keyring 00:43:14.159 21:32:47 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:14.159 21:32:47 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:43:14.159 21:32:47 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:43:14.159 21:32:47 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 3233190 ']' 00:43:14.159 21:32:47 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3233190 00:43:14.159 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3233190 ']' 00:43:14.159 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3233190 00:43:14.159 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:43:14.159 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:14.159 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3233190 00:43:14.159 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:14.159 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:14.160 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3233190' 00:43:14.160 killing process with pid 3233190 00:43:14.160 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3233190 00:43:14.160 21:32:47 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3233190 00:43:16.696 21:32:50 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:16.696 21:32:50 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:16.696 21:32:50 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:16.696 21:32:50 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:43:16.696 21:32:50 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:43:16.696 21:32:50 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:16.696 21:32:50 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:43:16.696 21:32:50 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:16.696 21:32:50 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:16.696 21:32:50 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:16.696 21:32:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:16.696 21:32:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:18.597 21:32:52 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:18.597 00:43:18.597 real 0m20.782s 00:43:18.597 user 0m34.016s 00:43:18.597 sys 0m3.506s 00:43:18.597 21:32:52 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:18.597 21:32:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:18.597 ************************************ 00:43:18.597 END TEST nvmf_identify_passthru 00:43:18.597 ************************************ 00:43:18.597 21:32:52 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:18.597 21:32:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:18.597 21:32:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:18.597 21:32:52 -- common/autotest_common.sh@10 -- # set +x 00:43:18.597 ************************************ 00:43:18.597 START TEST nvmf_dif 00:43:18.597 ************************************ 00:43:18.597 21:32:52 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:18.857 * Looking for test 
storage... 00:43:18.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:18.857 21:32:52 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:18.857 21:32:52 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:43:18.857 21:32:52 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:18.857 21:32:52 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:43:18.857 21:32:52 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:18.857 21:32:52 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:18.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:18.857 --rc genhtml_branch_coverage=1 00:43:18.857 --rc genhtml_function_coverage=1 00:43:18.857 --rc genhtml_legend=1 00:43:18.857 --rc geninfo_all_blocks=1 00:43:18.857 --rc geninfo_unexecuted_blocks=1 00:43:18.857 00:43:18.857 ' 00:43:18.857 21:32:52 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:18.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:18.857 --rc genhtml_branch_coverage=1 00:43:18.857 --rc genhtml_function_coverage=1 00:43:18.857 --rc genhtml_legend=1 00:43:18.857 --rc geninfo_all_blocks=1 00:43:18.857 --rc geninfo_unexecuted_blocks=1 00:43:18.857 00:43:18.857 ' 00:43:18.857 21:32:52 nvmf_dif -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:18.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:18.857 --rc genhtml_branch_coverage=1 00:43:18.857 --rc genhtml_function_coverage=1 00:43:18.857 --rc genhtml_legend=1 00:43:18.857 --rc geninfo_all_blocks=1 00:43:18.857 --rc geninfo_unexecuted_blocks=1 00:43:18.857 00:43:18.857 ' 00:43:18.857 21:32:52 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:18.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:18.857 --rc genhtml_branch_coverage=1 00:43:18.857 --rc genhtml_function_coverage=1 00:43:18.857 --rc genhtml_legend=1 00:43:18.857 --rc geninfo_all_blocks=1 00:43:18.857 --rc geninfo_unexecuted_blocks=1 00:43:18.857 00:43:18.857 ' 00:43:18.857 21:32:52 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:18.857 21:32:52 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:18.857 21:32:52 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:18.857 21:32:52 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:18.857 21:32:52 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:18.857 21:32:52 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:43:18.857 21:32:52 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:18.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:18.857 21:32:52 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:43:18.857 21:32:52 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:43:18.857 21:32:52 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:43:18.857 21:32:52 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:43:18.857 21:32:52 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:18.857 21:32:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:18.857 21:32:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:18.857 21:32:52 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:43:18.857 21:32:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:43:20.758 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:20.758 21:32:54 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:20.759 
21:32:54 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:43:20.759 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:43:20.759 Found net devices under 0000:0a:00.0: cvl_0_0 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:43:20.759 Found net devices under 0000:0a:00.1: cvl_0_1 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:20.759 21:32:54 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:21.018 21:32:54 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:21.018 21:32:54 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:21.018 21:32:54 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:21.018 21:32:54 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:21.018 21:32:54 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:21.018 21:32:54 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:21.018 21:32:54 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:21.018 21:32:54 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:21.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:21.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:43:21.018 00:43:21.018 --- 10.0.0.2 ping statistics --- 00:43:21.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:21.018 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:43:21.018 21:32:54 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:21.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:21.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:43:21.018 00:43:21.018 --- 10.0.0.1 ping statistics --- 00:43:21.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:21.018 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:43:21.018 21:32:54 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:21.018 21:32:54 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:43:21.018 21:32:54 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:21.018 21:32:54 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:21.951 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:21.951 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:43:21.951 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:21.951 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:21.951 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:43:21.951 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:21.951 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:21.951 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:22.210 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:22.210 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:22.210 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:22.210 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:22.210 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:43:22.210 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:22.210 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:22.210 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:22.210 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:22.210 21:32:55 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:22.210 21:32:55 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:22.210 21:32:55 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:22.210 21:32:55 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:22.210 21:32:55 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:22.210 21:32:55 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:22.210 21:32:55 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:43:22.210 21:32:55 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:43:22.210 21:32:55 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:22.210 21:32:55 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:22.210 21:32:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:22.210 21:32:55 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3236735 00:43:22.210 21:32:55 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:43:22.210 21:32:55 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3236735 00:43:22.210 21:32:55 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3236735 ']' 00:43:22.210 21:32:55 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:22.210 21:32:55 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:22.210 21:32:55 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:43:22.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:22.210 21:32:55 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:22.210 21:32:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:22.468 [2024-11-19 21:32:56.069057] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:43:22.468 [2024-11-19 21:32:56.069240] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:22.468 [2024-11-19 21:32:56.241564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:22.727 [2024-11-19 21:32:56.381160] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:22.727 [2024-11-19 21:32:56.381252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:22.727 [2024-11-19 21:32:56.381277] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:22.727 [2024-11-19 21:32:56.381300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:22.727 [2024-11-19 21:32:56.381318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:22.727 [2024-11-19 21:32:56.382935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:23.661 21:32:57 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:23.661 21:32:57 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:43:23.661 21:32:57 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:23.661 21:32:57 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:23.661 21:32:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:23.661 21:32:57 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:23.661 21:32:57 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:43:23.662 21:32:57 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:43:23.662 21:32:57 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.662 21:32:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:23.662 [2024-11-19 21:32:57.122163] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:23.662 21:32:57 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.662 21:32:57 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:43:23.662 21:32:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:23.662 21:32:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:23.662 21:32:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:23.662 ************************************ 00:43:23.662 START TEST fio_dif_1_default 00:43:23.662 ************************************ 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:23.662 bdev_null0 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:23.662 [2024-11-19 21:32:57.178477] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:23.662 { 00:43:23.662 "params": { 00:43:23.662 "name": "Nvme$subsystem", 00:43:23.662 "trtype": "$TEST_TRANSPORT", 00:43:23.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:23.662 "adrfam": "ipv4", 00:43:23.662 "trsvcid": "$NVMF_PORT", 00:43:23.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:23.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:23.662 "hdgst": ${hdgst:-false}, 00:43:23.662 "ddgst": ${ddgst:-false} 00:43:23.662 }, 00:43:23.662 "method": "bdev_nvme_attach_controller" 00:43:23.662 } 00:43:23.662 EOF 00:43:23.662 )") 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
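The rpc_cmd calls traced above reduce to a short RPC sequence against the target started earlier inside the namespace (ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF, listening on /var/tmp/spdk.sock). A sketch of the equivalent standalone invocations with SPDK's scripts/rpc.py; treat it as an illustration rather than the harness's exact code path, since rpc_cmd also handles the namespace and socket plumbing:

# TCP transport with DIF insert/strip enabled (the harness appended --dif-insert-or-strip)
scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
# null bdev with 512-byte blocks and 16 bytes of per-block metadata carrying DIF type 1
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# expose it over NVMe/TCP on the namespaced address
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420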
00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
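Because the spdk_bdev fio plugin in this build is ASan-instrumented, the fio_bdev wrapper resolves which sanitizer runtime to preload before launching fio; that is what the ldd / grep libasan / awk trace above is doing. A condensed sketch of that resolution (paths are the ones from this run, and the real helper loops over several sanitizers rather than just libasan):

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # resolves to /usr/lib64/libasan.so.8 here
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
# /dev/fd/62 carries the generated bdev_nvme_attach_controller JSON; the positional
# /dev/fd/61 is the fio job file produced by gen_fio_conf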
00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:23.662 "params": { 00:43:23.662 "name": "Nvme0", 00:43:23.662 "trtype": "tcp", 00:43:23.662 "traddr": "10.0.0.2", 00:43:23.662 "adrfam": "ipv4", 00:43:23.662 "trsvcid": "4420", 00:43:23.662 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:23.662 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:23.662 "hdgst": false, 00:43:23.662 "ddgst": false 00:43:23.662 }, 00:43:23.662 "method": "bdev_nvme_attach_controller" 00:43:23.662 }' 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:23.662 21:32:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:23.920 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:23.920 fio-3.35 00:43:23.920 Starting 1 thread 00:43:36.120 00:43:36.120 filename0: (groupid=0, jobs=1): err= 0: pid=3237090: Tue Nov 19 21:33:08 2024 00:43:36.120 read: IOPS=190, BW=762KiB/s (780kB/s)(7616KiB/10001msec) 00:43:36.120 slat (nsec): min=5125, max=75237, avg=15104.88, stdev=4379.29 00:43:36.120 clat (usec): min=694, max=43501, avg=20964.30, stdev=20169.51 00:43:36.120 lat (usec): min=706, max=43525, avg=20979.40, stdev=20168.78 00:43:36.120 clat percentiles (usec): 00:43:36.120 | 1.00th=[ 725], 5.00th=[ 742], 10.00th=[ 758], 20.00th=[ 783], 00:43:36.120 | 30.00th=[ 807], 40.00th=[ 840], 50.00th=[ 1303], 60.00th=[41157], 00:43:36.120 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:36.120 | 99.00th=[41157], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:43:36.120 | 99.99th=[43254] 00:43:36.120 bw ( KiB/s): min= 672, max= 832, per=100.00%, avg=762.95, stdev=28.75, samples=19 00:43:36.120 iops : min= 168, max= 208, avg=190.74, stdev= 7.19, samples=19 00:43:36.120 lat (usec) : 750=7.93%, 1000=41.70% 00:43:36.120 lat (msec) : 2=0.37%, 50=50.00% 00:43:36.120 cpu : usr=92.33%, sys=7.09%, ctx=14, majf=0, minf=1634 00:43:36.120 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:36.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:36.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:36.120 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:36.120 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:36.120 00:43:36.120 Run status group 0 (all jobs): 00:43:36.121 READ: bw=762KiB/s (780kB/s), 762KiB/s-762KiB/s (780kB/s-780kB/s), io=7616KiB (7799kB), run=10001-10001msec 00:43:36.121 ----------------------------------------------------- 00:43:36.121 Suppressions used: 00:43:36.121 count bytes template 00:43:36.121 1 8 /usr/src/fio/parse.c 00:43:36.121 1 8 libtcmalloc_minimal.so 00:43:36.121 1 904 libcrypto.so 00:43:36.121 ----------------------------------------------------- 00:43:36.121 00:43:36.121 21:33:09 
nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.121 00:43:36.121 real 0m12.337s 00:43:36.121 user 0m11.429s 00:43:36.121 sys 0m1.175s 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:36.121 ************************************ 00:43:36.121 END TEST fio_dif_1_default 00:43:36.121 ************************************ 00:43:36.121 21:33:09 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:43:36.121 21:33:09 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:36.121 21:33:09 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:36.121 21:33:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:36.121 ************************************ 00:43:36.121 START TEST fio_dif_1_multi_subsystems 00:43:36.121 ************************************ 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:36.121 bdev_null0 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:36.121 [2024-11-19 21:33:09.557653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:36.121 bdev_null1 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:36.121 { 00:43:36.121 "params": { 00:43:36.121 "name": "Nvme$subsystem", 00:43:36.121 "trtype": "$TEST_TRANSPORT", 00:43:36.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:36.121 "adrfam": "ipv4", 00:43:36.121 "trsvcid": "$NVMF_PORT", 00:43:36.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:36.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:36.121 "hdgst": ${hdgst:-false}, 00:43:36.121 "ddgst": ${ddgst:-false} 00:43:36.121 }, 00:43:36.121 "method": "bdev_nvme_attach_controller" 00:43:36.121 } 00:43:36.121 EOF 00:43:36.121 )") 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:36.121 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:36.121 { 00:43:36.121 "params": { 00:43:36.121 "name": "Nvme$subsystem", 00:43:36.121 "trtype": "$TEST_TRANSPORT", 00:43:36.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:36.121 "adrfam": "ipv4", 00:43:36.122 "trsvcid": "$NVMF_PORT", 00:43:36.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:36.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:36.122 "hdgst": ${hdgst:-false}, 00:43:36.122 "ddgst": ${ddgst:-false} 00:43:36.122 }, 00:43:36.122 "method": "bdev_nvme_attach_controller" 00:43:36.122 } 00:43:36.122 EOF 00:43:36.122 )") 00:43:36.122 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:43:36.122 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:43:36.122 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:36.122 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:43:36.122 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:43:36.122 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:36.122 "params": { 00:43:36.122 "name": "Nvme0", 00:43:36.122 "trtype": "tcp", 00:43:36.122 "traddr": "10.0.0.2", 00:43:36.122 "adrfam": "ipv4", 00:43:36.122 "trsvcid": "4420", 00:43:36.122 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:36.122 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:36.122 "hdgst": false, 00:43:36.122 "ddgst": false 00:43:36.122 }, 00:43:36.122 "method": "bdev_nvme_attach_controller" 00:43:36.122 },{ 00:43:36.122 "params": { 00:43:36.122 "name": "Nvme1", 00:43:36.122 "trtype": "tcp", 00:43:36.122 "traddr": "10.0.0.2", 00:43:36.122 "adrfam": "ipv4", 00:43:36.122 "trsvcid": "4420", 00:43:36.122 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:36.122 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:36.122 "hdgst": false, 00:43:36.122 "ddgst": false 00:43:36.122 }, 00:43:36.122 "method": "bdev_nvme_attach_controller" 00:43:36.122 }' 00:43:36.122 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:36.122 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:36.122 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:43:36.122 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:36.122 21:33:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:36.122 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:36.122 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:36.122 
fio-3.35 00:43:36.122 Starting 2 threads 00:43:48.319 00:43:48.319 filename0: (groupid=0, jobs=1): err= 0: pid=3239225: Tue Nov 19 21:33:21 2024 00:43:48.319 read: IOPS=96, BW=386KiB/s (396kB/s)(3872KiB/10023msec) 00:43:48.319 slat (nsec): min=6894, max=71114, avg=13862.49, stdev=4846.74 00:43:48.319 clat (usec): min=40847, max=44404, avg=41373.01, stdev=556.49 00:43:48.319 lat (usec): min=40858, max=44425, avg=41386.88, stdev=556.07 00:43:48.319 clat percentiles (usec): 00:43:48.319 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:48.319 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:48.319 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:48.319 | 99.00th=[42730], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:43:48.319 | 99.99th=[44303] 00:43:48.319 bw ( KiB/s): min= 352, max= 416, per=33.50%, avg=385.60, stdev=12.61, samples=20 00:43:48.319 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:43:48.319 lat (msec) : 50=100.00% 00:43:48.319 cpu : usr=94.20%, sys=5.31%, ctx=14, majf=0, minf=1636 00:43:48.319 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:48.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:48.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:48.319 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:48.319 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:48.319 filename1: (groupid=0, jobs=1): err= 0: pid=3239226: Tue Nov 19 21:33:21 2024 00:43:48.319 read: IOPS=191, BW=764KiB/s (783kB/s)(7648KiB/10007msec) 00:43:48.319 slat (nsec): min=7157, max=50301, avg=14133.45, stdev=5588.14 00:43:48.319 clat (usec): min=693, max=44528, avg=20890.84, stdev=20187.52 00:43:48.319 lat (usec): min=706, max=44572, avg=20904.98, stdev=20187.93 00:43:48.319 clat percentiles (usec): 00:43:48.319 | 1.00th=[ 717], 5.00th=[ 742], 10.00th=[ 750], 20.00th=[ 775], 00:43:48.319 | 30.00th=[ 799], 40.00th=[ 824], 50.00th=[ 1287], 60.00th=[41157], 00:43:48.319 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:48.319 | 99.00th=[41681], 99.50th=[41681], 99.90th=[44303], 99.95th=[44303], 00:43:48.319 | 99.99th=[44303] 00:43:48.319 bw ( KiB/s): min= 672, max= 832, per=66.38%, avg=763.20, stdev=34.86, samples=20 00:43:48.319 iops : min= 168, max= 208, avg=190.80, stdev= 8.72, samples=20 00:43:48.319 lat (usec) : 750=8.68%, 1000=40.90% 00:43:48.319 lat (msec) : 2=0.63%, 50=49.79% 00:43:48.319 cpu : usr=94.26%, sys=5.24%, ctx=25, majf=0, minf=1636 00:43:48.319 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:48.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:48.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:48.319 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:48.319 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:48.319 00:43:48.319 Run status group 0 (all jobs): 00:43:48.319 READ: bw=1149KiB/s (1177kB/s), 386KiB/s-764KiB/s (396kB/s-783kB/s), io=11.2MiB (11.8MB), run=10007-10023msec 00:43:48.319 ----------------------------------------------------- 00:43:48.319 Suppressions used: 00:43:48.319 count bytes template 00:43:48.319 2 16 /usr/src/fio/parse.c 00:43:48.319 1 8 libtcmalloc_minimal.so 00:43:48.319 1 904 libcrypto.so 00:43:48.319 ----------------------------------------------------- 00:43:48.319 00:43:48.319 21:33:22 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:43:48.319 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:43:48.319 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:48.319 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:48.319 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.320 00:43:48.320 real 0m12.576s 00:43:48.320 user 0m21.411s 00:43:48.320 sys 0m1.487s 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:48.320 21:33:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:48.320 ************************************ 00:43:48.320 END TEST fio_dif_1_multi_subsystems 00:43:48.320 ************************************ 00:43:48.578 21:33:22 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:43:48.578 21:33:22 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:48.578 21:33:22 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:48.578 21:33:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:48.578 ************************************ 00:43:48.578 START TEST fio_dif_rand_params 00:43:48.578 ************************************ 00:43:48.578 
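Each dif.sh test case in this log follows the same envelope, so the fio_dif_rand_params run starting here reads the same way as the two cases above: build the DIF-enabled backing devices, drive them through the spdk_bdev fio plugin, then tear everything down. The skeleton, using the harness's own helper names as traced (parameters such as NULL_DIF, bs, numjobs, iodepth and runtime are set per pass just below):

create_subsystems 0      # bdev_null_create ... --dif-type $NULL_DIF; nvmf_create_subsystem; nvmf_subsystem_add_ns; nvmf_subsystem_add_listener
fio /dev/fd/62           # fio_bdev: spdk_bdev ioengine, generated attach JSON on fd 62, generated job file on fd 61
destroy_subsystems 0     # nvmf_delete_subsystem; bdev_null_delete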
21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:43:48.578 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:43:48.578 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:43:48.578 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:43:48.578 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:43:48.578 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:43:48.578 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:43:48.578 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:43:48.578 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:43:48.578 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:48.578 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:48.578 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:48.578 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:48.578 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:48.578 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.578 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:48.578 bdev_null0 00:43:48.578 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:48.579 [2024-11-19 21:33:22.176875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@560 -- # config=() 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:48.579 { 00:43:48.579 "params": { 00:43:48.579 "name": "Nvme$subsystem", 00:43:48.579 "trtype": "$TEST_TRANSPORT", 00:43:48.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:48.579 "adrfam": "ipv4", 00:43:48.579 "trsvcid": "$NVMF_PORT", 00:43:48.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:48.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:48.579 "hdgst": ${hdgst:-false}, 00:43:48.579 "ddgst": ${ddgst:-false} 00:43:48.579 }, 00:43:48.579 "method": "bdev_nvme_attach_controller" 00:43:48.579 } 00:43:48.579 EOF 00:43:48.579 )") 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:48.579 "params": { 00:43:48.579 "name": "Nvme0", 00:43:48.579 "trtype": "tcp", 00:43:48.579 "traddr": "10.0.0.2", 00:43:48.579 "adrfam": "ipv4", 00:43:48.579 "trsvcid": "4420", 00:43:48.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:48.579 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:48.579 "hdgst": false, 00:43:48.579 "ddgst": false 00:43:48.579 }, 00:43:48.579 "method": "bdev_nvme_attach_controller" 00:43:48.579 }' 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:48.579 21:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:48.837 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:48.837 ... 00:43:48.837 fio-3.35 00:43:48.837 Starting 3 threads 00:43:55.397 00:43:55.397 filename0: (groupid=0, jobs=1): err= 0: pid=3240746: Tue Nov 19 21:33:28 2024 00:43:55.397 read: IOPS=191, BW=23.9MiB/s (25.1MB/s)(121MiB/5046msec) 00:43:55.397 slat (nsec): min=5995, max=49162, avg=22008.18, stdev=4965.99 00:43:55.397 clat (usec): min=5793, max=57525, avg=15598.86, stdev=6698.72 00:43:55.397 lat (usec): min=5812, max=57546, avg=15620.86, stdev=6698.53 00:43:55.397 clat percentiles (usec): 00:43:55.397 | 1.00th=[ 8848], 5.00th=[10421], 10.00th=[12649], 20.00th=[13698], 00:43:55.397 | 30.00th=[14091], 40.00th=[14484], 50.00th=[14746], 60.00th=[15139], 00:43:55.397 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16319], 95.00th=[16909], 00:43:55.397 | 99.00th=[51643], 99.50th=[56361], 99.90th=[57410], 99.95th=[57410], 00:43:55.397 | 99.99th=[57410] 00:43:55.397 bw ( KiB/s): min=16929, max=26880, per=34.05%, avg=24656.10, stdev=2931.67, samples=10 00:43:55.397 iops : min= 132, max= 210, avg=192.60, stdev=22.98, samples=10 00:43:55.397 lat (msec) : 10=3.31%, 20=93.37%, 50=1.76%, 100=1.55% 00:43:55.397 cpu : usr=90.07%, sys=7.55%, ctx=214, majf=0, minf=1636 00:43:55.397 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:55.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.397 issued rwts: total=966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.397 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:55.397 filename0: (groupid=0, jobs=1): err= 0: pid=3240747: Tue Nov 19 21:33:28 2024 00:43:55.397 read: IOPS=183, BW=23.0MiB/s (24.1MB/s)(116MiB/5044msec) 00:43:55.397 slat (nsec): min=6030, max=41105, avg=22613.66, stdev=2993.00 00:43:55.397 clat (usec): min=5790, max=58090, avg=16233.82, stdev=7420.20 00:43:55.397 lat (usec): min=5804, max=58109, avg=16256.43, stdev=7419.78 00:43:55.397 clat percentiles (usec): 00:43:55.397 | 1.00th=[ 5932], 5.00th=[10683], 10.00th=[12780], 20.00th=[14091], 00:43:55.397 | 
30.00th=[14615], 40.00th=[15008], 50.00th=[15270], 60.00th=[15533], 00:43:55.397 | 70.00th=[15795], 80.00th=[16319], 90.00th=[16909], 95.00th=[17695], 00:43:55.397 | 99.00th=[55313], 99.50th=[57934], 99.90th=[57934], 99.95th=[57934], 00:43:55.397 | 99.99th=[57934] 00:43:55.397 bw ( KiB/s): min=10752, max=26880, per=32.73%, avg=23705.60, stdev=4726.88, samples=10 00:43:55.397 iops : min= 84, max= 210, avg=185.20, stdev=36.93, samples=10 00:43:55.397 lat (msec) : 10=2.26%, 20=93.97%, 50=1.08%, 100=2.69% 00:43:55.397 cpu : usr=93.08%, sys=6.27%, ctx=12, majf=0, minf=1634 00:43:55.397 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:55.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.397 issued rwts: total=928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.397 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:55.397 filename0: (groupid=0, jobs=1): err= 0: pid=3240748: Tue Nov 19 21:33:28 2024 00:43:55.397 read: IOPS=191, BW=24.0MiB/s (25.2MB/s)(120MiB/5006msec) 00:43:55.397 slat (nsec): min=9741, max=54559, avg=20455.57, stdev=3416.08 00:43:55.397 clat (usec): min=5917, max=94059, avg=15596.18, stdev=6553.77 00:43:55.397 lat (usec): min=5937, max=94079, avg=15616.63, stdev=6553.75 00:43:55.397 clat percentiles (usec): 00:43:55.397 | 1.00th=[ 6325], 5.00th=[ 8979], 10.00th=[10290], 20.00th=[13435], 00:43:55.397 | 30.00th=[14091], 40.00th=[14615], 50.00th=[15008], 60.00th=[15664], 00:43:55.397 | 70.00th=[16319], 80.00th=[16909], 90.00th=[17957], 95.00th=[18482], 00:43:55.397 | 99.00th=[52167], 99.50th=[54264], 99.90th=[93848], 99.95th=[93848], 00:43:55.397 | 99.99th=[93848] 00:43:55.397 bw ( KiB/s): min=22272, max=27136, per=33.90%, avg=24550.40, stdev=1335.41, samples=10 00:43:55.397 iops : min= 174, max= 212, avg=191.80, stdev=10.43, samples=10 00:43:55.397 lat (msec) : 10=8.53%, 20=88.76%, 50=1.56%, 100=1.14% 00:43:55.397 cpu : usr=92.87%, sys=6.53%, ctx=15, majf=0, minf=1636 00:43:55.397 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:55.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:55.397 issued rwts: total=961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:55.397 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:55.397 00:43:55.397 Run status group 0 (all jobs): 00:43:55.397 READ: bw=70.7MiB/s (74.2MB/s), 23.0MiB/s-24.0MiB/s (24.1MB/s-25.2MB/s), io=357MiB (374MB), run=5006-5046msec 00:43:55.977 ----------------------------------------------------- 00:43:55.977 Suppressions used: 00:43:55.977 count bytes template 00:43:55.977 5 44 /usr/src/fio/parse.c 00:43:55.977 1 8 libtcmalloc_minimal.so 00:43:55.977 1 904 libcrypto.so 00:43:55.977 ----------------------------------------------------- 00:43:55.977 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:55.977 bdev_null0 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:55.977 [2024-11-19 21:33:29.705208] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:55.977 bdev_null1 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:55.977 bdev_null2 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.977 21:33:29 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.977 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:56.236 { 00:43:56.236 "params": { 00:43:56.236 "name": "Nvme$subsystem", 00:43:56.236 "trtype": "$TEST_TRANSPORT", 00:43:56.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:56.236 "adrfam": "ipv4", 00:43:56.236 "trsvcid": "$NVMF_PORT", 00:43:56.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:56.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:56.236 "hdgst": ${hdgst:-false}, 00:43:56.236 "ddgst": ${ddgst:-false} 00:43:56.236 }, 00:43:56.236 "method": "bdev_nvme_attach_controller" 00:43:56.236 } 00:43:56.236 EOF 00:43:56.236 )") 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:56.236 { 00:43:56.236 "params": { 00:43:56.236 "name": "Nvme$subsystem", 00:43:56.236 "trtype": "$TEST_TRANSPORT", 00:43:56.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:56.236 "adrfam": "ipv4", 00:43:56.236 "trsvcid": "$NVMF_PORT", 00:43:56.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:56.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:56.236 "hdgst": ${hdgst:-false}, 00:43:56.236 "ddgst": ${ddgst:-false} 00:43:56.236 }, 00:43:56.236 "method": "bdev_nvme_attach_controller" 00:43:56.236 } 00:43:56.236 EOF 00:43:56.236 )") 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:56.236 { 00:43:56.236 "params": { 00:43:56.236 "name": "Nvme$subsystem", 00:43:56.236 "trtype": "$TEST_TRANSPORT", 00:43:56.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:56.236 "adrfam": "ipv4", 00:43:56.236 "trsvcid": "$NVMF_PORT", 00:43:56.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:56.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:56.236 "hdgst": ${hdgst:-false}, 00:43:56.236 "ddgst": ${ddgst:-false} 00:43:56.236 }, 00:43:56.236 "method": "bdev_nvme_attach_controller" 00:43:56.236 } 00:43:56.236 EOF 00:43:56.236 )") 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
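
[editor's note] The xtrace above shows how the harness decides whether to preload an ASan runtime before handing control to fio's SPDK bdev plugin: it runs ldd on the plugin, greps for libasan, and preloads whatever it finds together with the plugin itself. A minimal sketch of that pattern, assuming the plugin path used in this run (in the harness, /dev/fd/62 and /dev/fd/61 are process-substitution fds carrying the generated JSON and fio job file):

#!/usr/bin/env bash
# Sketch of the sanitizer-preload pattern traced above (assumed standalone form).
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev

# Pick up the ASan runtime the plugin was linked against, if any.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

# fio dlopen()s the plugin, so any sanitizer runtime must be preloaded into fio
# alongside the plugin or the load fails.
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
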
00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:56.236 21:33:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:56.236 "params": { 00:43:56.236 "name": "Nvme0", 00:43:56.236 "trtype": "tcp", 00:43:56.236 "traddr": "10.0.0.2", 00:43:56.236 "adrfam": "ipv4", 00:43:56.236 "trsvcid": "4420", 00:43:56.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:56.236 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:56.236 "hdgst": false, 00:43:56.236 "ddgst": false 00:43:56.236 }, 00:43:56.236 "method": "bdev_nvme_attach_controller" 00:43:56.236 },{ 00:43:56.236 "params": { 00:43:56.236 "name": "Nvme1", 00:43:56.236 "trtype": "tcp", 00:43:56.236 "traddr": "10.0.0.2", 00:43:56.236 "adrfam": "ipv4", 00:43:56.236 "trsvcid": "4420", 00:43:56.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:56.236 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:56.237 "hdgst": false, 00:43:56.237 "ddgst": false 00:43:56.237 }, 00:43:56.237 "method": "bdev_nvme_attach_controller" 00:43:56.237 },{ 00:43:56.237 "params": { 00:43:56.237 "name": "Nvme2", 00:43:56.237 "trtype": "tcp", 00:43:56.237 "traddr": "10.0.0.2", 00:43:56.237 "adrfam": "ipv4", 00:43:56.237 "trsvcid": "4420", 00:43:56.237 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:43:56.237 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:43:56.237 "hdgst": false, 00:43:56.237 "ddgst": false 00:43:56.237 }, 00:43:56.237 "method": "bdev_nvme_attach_controller" 00:43:56.237 }' 00:43:56.237 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:56.237 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:56.237 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:43:56.237 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:56.237 21:33:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:56.495 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:56.495 ... 00:43:56.495 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:56.495 ... 00:43:56.495 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:56.495 ... 
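
[editor's note] The resolved JSON printed just above is what the spdk_bdev ioengine consumes via --spdk_json_conf; each bdev_nvme_attach_controller entry opens one NVMe/TCP initiator connection to cnode0, cnode1 or cnode2 on 10.0.0.2:4420. A rough standalone equivalent, assuming the stock scripts/rpc.py helper rather than the JSON path the harness actually uses:

# Hypothetical standalone equivalent of the three attach entries above,
# using the stock SPDK scripts/rpc.py helper (an assumption of this sketch).
for i in 0 1 2; do
    ./scripts/rpc.py bdev_nvme_attach_controller \
        -b "Nvme$i" -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -n "nqn.2016-06.io.spdk:cnode$i" \
        -q "nqn.2016-06.io.spdk:host$i"
done
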
00:43:56.495 fio-3.35 00:43:56.495 Starting 24 threads 00:44:08.699 00:44:08.699 filename0: (groupid=0, jobs=1): err= 0: pid=3241728: Tue Nov 19 21:33:41 2024 00:44:08.699 read: IOPS=115, BW=462KiB/s (473kB/s)(4672KiB/10114msec) 00:44:08.699 slat (nsec): min=5799, max=85861, avg=18679.12, stdev=11999.54 00:44:08.699 clat (msec): min=17, max=252, avg=137.91, stdev=41.07 00:44:08.699 lat (msec): min=17, max=252, avg=137.93, stdev=41.07 00:44:08.699 clat percentiles (msec): 00:44:08.699 | 1.00th=[ 18], 5.00th=[ 79], 10.00th=[ 92], 20.00th=[ 108], 00:44:08.699 | 30.00th=[ 123], 40.00th=[ 133], 50.00th=[ 136], 60.00th=[ 144], 00:44:08.699 | 70.00th=[ 159], 80.00th=[ 169], 90.00th=[ 190], 95.00th=[ 207], 00:44:08.699 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 253], 99.95th=[ 253], 00:44:08.699 | 99.99th=[ 253] 00:44:08.699 bw ( KiB/s): min= 304, max= 768, per=5.56%, avg=460.80, stdev=99.53, samples=20 00:44:08.699 iops : min= 76, max= 192, avg=115.20, stdev=24.88, samples=20 00:44:08.699 lat (msec) : 20=1.37%, 50=2.57%, 100=11.64%, 250=84.08%, 500=0.34% 00:44:08.699 cpu : usr=98.26%, sys=1.26%, ctx=17, majf=0, minf=1634 00:44:08.699 IO depths : 1=0.5%, 2=1.4%, 4=7.9%, 8=77.7%, 16=12.5%, 32=0.0%, >=64=0.0% 00:44:08.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.699 complete : 0=0.0%, 4=89.1%, 8=6.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.699 issued rwts: total=1168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.699 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.699 filename0: (groupid=0, jobs=1): err= 0: pid=3241729: Tue Nov 19 21:33:41 2024 00:44:08.699 read: IOPS=79, BW=318KiB/s (325kB/s)(3200KiB/10075msec) 00:44:08.699 slat (nsec): min=8784, max=82624, avg=28025.52, stdev=16138.37 00:44:08.699 clat (msec): min=84, max=324, avg=201.24, stdev=33.12 00:44:08.699 lat (msec): min=84, max=324, avg=201.27, stdev=33.12 00:44:08.699 clat percentiles (msec): 00:44:08.699 | 1.00th=[ 120], 5.00th=[ 146], 10.00th=[ 159], 20.00th=[ 171], 00:44:08.699 | 30.00th=[ 190], 40.00th=[ 197], 50.00th=[ 203], 60.00th=[ 209], 00:44:08.699 | 70.00th=[ 218], 80.00th=[ 226], 90.00th=[ 247], 95.00th=[ 253], 00:44:08.699 | 99.00th=[ 275], 99.50th=[ 279], 99.90th=[ 326], 99.95th=[ 326], 00:44:08.699 | 99.99th=[ 326] 00:44:08.699 bw ( KiB/s): min= 251, max= 384, per=3.79%, avg=313.35, stdev=62.63, samples=20 00:44:08.699 iops : min= 62, max= 96, avg=78.30, stdev=15.70, samples=20 00:44:08.699 lat (msec) : 100=0.50%, 250=91.50%, 500=8.00% 00:44:08.699 cpu : usr=97.83%, sys=1.46%, ctx=56, majf=0, minf=1633 00:44:08.699 IO depths : 1=4.6%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.9%, 32=0.0%, >=64=0.0% 00:44:08.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.699 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.699 issued rwts: total=800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.699 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.700 filename0: (groupid=0, jobs=1): err= 0: pid=3241730: Tue Nov 19 21:33:41 2024 00:44:08.700 read: IOPS=77, BW=312KiB/s (319kB/s)(3136KiB/10056msec) 00:44:08.700 slat (nsec): min=7650, max=85806, avg=39982.28, stdev=10799.32 00:44:08.700 clat (msec): min=118, max=327, avg=204.89, stdev=33.76 00:44:08.700 lat (msec): min=118, max=328, avg=204.93, stdev=33.76 00:44:08.700 clat percentiles (msec): 00:44:08.700 | 1.00th=[ 140], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 174], 00:44:08.700 | 30.00th=[ 192], 40.00th=[ 197], 50.00th=[ 203], 60.00th=[ 207], 
00:44:08.700 | 70.00th=[ 220], 80.00th=[ 226], 90.00th=[ 249], 95.00th=[ 257], 00:44:08.700 | 99.00th=[ 300], 99.50th=[ 321], 99.90th=[ 330], 99.95th=[ 330], 00:44:08.700 | 99.99th=[ 330] 00:44:08.700 bw ( KiB/s): min= 254, max= 384, per=3.71%, avg=307.10, stdev=61.42, samples=20 00:44:08.700 iops : min= 63, max= 96, avg=76.75, stdev=15.38, samples=20 00:44:08.700 lat (msec) : 250=92.47%, 500=7.53% 00:44:08.700 cpu : usr=98.26%, sys=1.21%, ctx=29, majf=0, minf=1633 00:44:08.700 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:44:08.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.700 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.700 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.700 filename0: (groupid=0, jobs=1): err= 0: pid=3241731: Tue Nov 19 21:33:41 2024 00:44:08.700 read: IOPS=85, BW=343KiB/s (351kB/s)(3456KiB/10087msec) 00:44:08.700 slat (usec): min=6, max=104, avg=52.49, stdev=20.78 00:44:08.700 clat (msec): min=96, max=230, avg=186.17, stdev=27.87 00:44:08.700 lat (msec): min=96, max=230, avg=186.23, stdev=27.88 00:44:08.700 clat percentiles (msec): 00:44:08.700 | 1.00th=[ 107], 5.00th=[ 140], 10.00th=[ 148], 20.00th=[ 167], 00:44:08.700 | 30.00th=[ 171], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 194], 00:44:08.700 | 70.00th=[ 203], 80.00th=[ 213], 90.00th=[ 220], 95.00th=[ 224], 00:44:08.700 | 99.00th=[ 228], 99.50th=[ 230], 99.90th=[ 232], 99.95th=[ 232], 00:44:08.700 | 99.99th=[ 232] 00:44:08.700 bw ( KiB/s): min= 256, max= 384, per=4.10%, avg=339.20, stdev=62.64, samples=20 00:44:08.700 iops : min= 64, max= 96, avg=84.80, stdev=15.66, samples=20 00:44:08.700 lat (msec) : 100=0.23%, 250=99.77% 00:44:08.700 cpu : usr=97.66%, sys=1.53%, ctx=44, majf=0, minf=1635 00:44:08.700 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:44:08.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.700 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.700 issued rwts: total=864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.700 filename0: (groupid=0, jobs=1): err= 0: pid=3241732: Tue Nov 19 21:33:41 2024 00:44:08.700 read: IOPS=79, BW=318KiB/s (325kB/s)(3200KiB/10072msec) 00:44:08.700 slat (usec): min=12, max=110, avg=59.48, stdev=10.69 00:44:08.700 clat (msec): min=72, max=351, avg=200.87, stdev=42.01 00:44:08.700 lat (msec): min=72, max=351, avg=200.93, stdev=42.01 00:44:08.700 clat percentiles (msec): 00:44:08.700 | 1.00th=[ 73], 5.00th=[ 126], 10.00th=[ 159], 20.00th=[ 171], 00:44:08.700 | 30.00th=[ 190], 40.00th=[ 197], 50.00th=[ 203], 60.00th=[ 209], 00:44:08.700 | 70.00th=[ 218], 80.00th=[ 230], 90.00th=[ 251], 95.00th=[ 255], 00:44:08.700 | 99.00th=[ 313], 99.50th=[ 330], 99.90th=[ 351], 99.95th=[ 351], 00:44:08.700 | 99.99th=[ 351] 00:44:08.700 bw ( KiB/s): min= 254, max= 384, per=3.79%, avg=313.50, stdev=60.95, samples=20 00:44:08.700 iops : min= 63, max= 96, avg=78.35, stdev=15.26, samples=20 00:44:08.700 lat (msec) : 100=3.00%, 250=87.00%, 500=10.00% 00:44:08.700 cpu : usr=97.74%, sys=1.48%, ctx=81, majf=0, minf=1633 00:44:08.700 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:44:08.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.700 complete : 0=0.0%, 4=94.3%, 8=0.0%, 
16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.700 issued rwts: total=800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.700 filename0: (groupid=0, jobs=1): err= 0: pid=3241733: Tue Nov 19 21:33:41 2024 00:44:08.700 read: IOPS=83, BW=336KiB/s (344kB/s)(3392KiB/10097msec) 00:44:08.700 slat (usec): min=7, max=108, avg=50.15, stdev=21.73 00:44:08.700 clat (msec): min=115, max=277, avg=190.09, stdev=31.20 00:44:08.700 lat (msec): min=115, max=277, avg=190.14, stdev=31.21 00:44:08.700 clat percentiles (msec): 00:44:08.700 | 1.00th=[ 116], 5.00th=[ 130], 10.00th=[ 148], 20.00th=[ 167], 00:44:08.700 | 30.00th=[ 174], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 199], 00:44:08.700 | 70.00th=[ 209], 80.00th=[ 218], 90.00th=[ 224], 95.00th=[ 236], 00:44:08.700 | 99.00th=[ 266], 99.50th=[ 275], 99.90th=[ 279], 99.95th=[ 279], 00:44:08.700 | 99.99th=[ 279] 00:44:08.700 bw ( KiB/s): min= 256, max= 384, per=4.02%, avg=332.80, stdev=61.33, samples=20 00:44:08.700 iops : min= 64, max= 96, avg=83.20, stdev=15.33, samples=20 00:44:08.700 lat (msec) : 250=98.11%, 500=1.89% 00:44:08.700 cpu : usr=98.10%, sys=1.35%, ctx=19, majf=0, minf=1634 00:44:08.700 IO depths : 1=3.8%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.7%, 32=0.0%, >=64=0.0% 00:44:08.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.700 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.700 issued rwts: total=848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.700 filename0: (groupid=0, jobs=1): err= 0: pid=3241734: Tue Nov 19 21:33:41 2024 00:44:08.700 read: IOPS=90, BW=363KiB/s (372kB/s)(3672KiB/10106msec) 00:44:08.700 slat (usec): min=6, max=103, avg=41.32, stdev=22.49 00:44:08.700 clat (msec): min=89, max=264, avg=174.80, stdev=29.54 00:44:08.700 lat (msec): min=89, max=264, avg=174.85, stdev=29.55 00:44:08.700 clat percentiles (msec): 00:44:08.700 | 1.00th=[ 91], 5.00th=[ 128], 10.00th=[ 142], 20.00th=[ 150], 00:44:08.700 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 188], 00:44:08.700 | 70.00th=[ 194], 80.00th=[ 203], 90.00th=[ 207], 95.00th=[ 213], 00:44:08.700 | 99.00th=[ 255], 99.50th=[ 259], 99.90th=[ 266], 99.95th=[ 266], 00:44:08.700 | 99.99th=[ 266] 00:44:08.700 bw ( KiB/s): min= 256, max= 512, per=4.35%, avg=360.80, stdev=71.08, samples=20 00:44:08.700 iops : min= 64, max= 128, avg=90.20, stdev=17.77, samples=20 00:44:08.700 lat (msec) : 100=1.74%, 250=96.95%, 500=1.31% 00:44:08.700 cpu : usr=97.37%, sys=1.73%, ctx=82, majf=0, minf=1636 00:44:08.700 IO depths : 1=2.4%, 2=6.6%, 4=19.0%, 8=61.9%, 16=10.1%, 32=0.0%, >=64=0.0% 00:44:08.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.700 complete : 0=0.0%, 4=92.4%, 8=2.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.700 issued rwts: total=918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.700 filename0: (groupid=0, jobs=1): err= 0: pid=3241735: Tue Nov 19 21:33:41 2024 00:44:08.700 read: IOPS=79, BW=317KiB/s (325kB/s)(3200KiB/10081msec) 00:44:08.700 slat (usec): min=12, max=100, avg=58.57, stdev=11.51 00:44:08.700 clat (msec): min=141, max=327, avg=201.08, stdev=27.70 00:44:08.700 lat (msec): min=141, max=327, avg=201.14, stdev=27.71 00:44:08.700 clat percentiles (msec): 00:44:08.700 | 1.00th=[ 142], 5.00th=[ 150], 10.00th=[ 161], 20.00th=[ 171], 00:44:08.700 | 30.00th=[ 192], 
40.00th=[ 194], 50.00th=[ 201], 60.00th=[ 207], 00:44:08.700 | 70.00th=[ 218], 80.00th=[ 220], 90.00th=[ 236], 95.00th=[ 251], 00:44:08.700 | 99.00th=[ 257], 99.50th=[ 257], 99.90th=[ 330], 99.95th=[ 330], 00:44:08.700 | 99.99th=[ 330] 00:44:08.700 bw ( KiB/s): min= 256, max= 512, per=3.79%, avg=313.60, stdev=77.42, samples=20 00:44:08.700 iops : min= 64, max= 128, avg=78.40, stdev=19.35, samples=20 00:44:08.700 lat (msec) : 250=94.88%, 500=5.12% 00:44:08.700 cpu : usr=97.92%, sys=1.39%, ctx=60, majf=0, minf=1633 00:44:08.700 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:08.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.700 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.700 issued rwts: total=800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.700 filename1: (groupid=0, jobs=1): err= 0: pid=3241736: Tue Nov 19 21:33:41 2024 00:44:08.700 read: IOPS=79, BW=318KiB/s (325kB/s)(3200KiB/10073msec) 00:44:08.700 slat (nsec): min=9771, max=77525, avg=23678.42, stdev=9987.64 00:44:08.700 clat (msec): min=99, max=328, avg=201.25, stdev=32.37 00:44:08.700 lat (msec): min=99, max=328, avg=201.27, stdev=32.37 00:44:08.700 clat percentiles (msec): 00:44:08.700 | 1.00th=[ 128], 5.00th=[ 148], 10.00th=[ 157], 20.00th=[ 171], 00:44:08.700 | 30.00th=[ 190], 40.00th=[ 197], 50.00th=[ 203], 60.00th=[ 205], 00:44:08.700 | 70.00th=[ 218], 80.00th=[ 224], 90.00th=[ 247], 95.00th=[ 251], 00:44:08.700 | 99.00th=[ 284], 99.50th=[ 309], 99.90th=[ 330], 99.95th=[ 330], 00:44:08.700 | 99.99th=[ 330] 00:44:08.700 bw ( KiB/s): min= 256, max= 400, per=3.79%, avg=313.60, stdev=61.29, samples=20 00:44:08.700 iops : min= 64, max= 100, avg=78.40, stdev=15.32, samples=20 00:44:08.700 lat (msec) : 100=0.25%, 250=94.50%, 500=5.25% 00:44:08.700 cpu : usr=97.72%, sys=1.62%, ctx=84, majf=0, minf=1633 00:44:08.700 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:44:08.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.700 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.700 issued rwts: total=800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.700 filename1: (groupid=0, jobs=1): err= 0: pid=3241737: Tue Nov 19 21:33:41 2024 00:44:08.700 read: IOPS=79, BW=318KiB/s (325kB/s)(3200KiB/10070msec) 00:44:08.700 slat (nsec): min=12238, max=80557, avg=24314.75, stdev=15901.75 00:44:08.700 clat (msec): min=74, max=352, avg=201.18, stdev=41.26 00:44:08.700 lat (msec): min=74, max=352, avg=201.21, stdev=41.25 00:44:08.700 clat percentiles (msec): 00:44:08.700 | 1.00th=[ 75], 5.00th=[ 127], 10.00th=[ 159], 20.00th=[ 171], 00:44:08.700 | 30.00th=[ 190], 40.00th=[ 197], 50.00th=[ 203], 60.00th=[ 209], 00:44:08.700 | 70.00th=[ 218], 80.00th=[ 230], 90.00th=[ 249], 95.00th=[ 255], 00:44:08.700 | 99.00th=[ 317], 99.50th=[ 326], 99.90th=[ 355], 99.95th=[ 355], 00:44:08.700 | 99.99th=[ 355] 00:44:08.701 bw ( KiB/s): min= 256, max= 384, per=3.79%, avg=313.60, stdev=60.85, samples=20 00:44:08.701 iops : min= 64, max= 96, avg=78.40, stdev=15.21, samples=20 00:44:08.701 lat (msec) : 100=3.25%, 250=87.00%, 500=9.75% 00:44:08.701 cpu : usr=98.25%, sys=1.25%, ctx=21, majf=0, minf=1635 00:44:08.701 IO depths : 1=3.6%, 2=9.9%, 4=25.0%, 8=52.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:44:08.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.701 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.701 issued rwts: total=800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.701 filename1: (groupid=0, jobs=1): err= 0: pid=3241738: Tue Nov 19 21:33:41 2024 00:44:08.701 read: IOPS=107, BW=428KiB/s (439kB/s)(4320KiB/10085msec) 00:44:08.701 slat (nsec): min=8762, max=67926, avg=20862.99, stdev=12636.57 00:44:08.701 clat (msec): min=98, max=240, avg=148.90, stdev=22.71 00:44:08.701 lat (msec): min=98, max=240, avg=148.93, stdev=22.71 00:44:08.701 clat percentiles (msec): 00:44:08.701 | 1.00th=[ 105], 5.00th=[ 120], 10.00th=[ 125], 20.00th=[ 131], 00:44:08.701 | 30.00th=[ 134], 40.00th=[ 140], 50.00th=[ 144], 60.00th=[ 148], 00:44:08.701 | 70.00th=[ 161], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 190], 00:44:08.701 | 99.00th=[ 211], 99.50th=[ 230], 99.90th=[ 241], 99.95th=[ 241], 00:44:08.701 | 99.99th=[ 241] 00:44:08.701 bw ( KiB/s): min= 256, max= 512, per=5.14%, avg=425.60, stdev=64.71, samples=20 00:44:08.701 iops : min= 64, max= 128, avg=106.40, stdev=16.18, samples=20 00:44:08.701 lat (msec) : 100=0.19%, 250=99.81% 00:44:08.701 cpu : usr=98.39%, sys=1.09%, ctx=26, majf=0, minf=1631 00:44:08.701 IO depths : 1=1.3%, 2=3.8%, 4=13.6%, 8=70.0%, 16=11.3%, 32=0.0%, >=64=0.0% 00:44:08.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.701 complete : 0=0.0%, 4=90.9%, 8=3.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.701 issued rwts: total=1080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.701 filename1: (groupid=0, jobs=1): err= 0: pid=3241739: Tue Nov 19 21:33:41 2024 00:44:08.701 read: IOPS=83, BW=332KiB/s (340kB/s)(3328KiB/10014msec) 00:44:08.701 slat (nsec): min=6107, max=75938, avg=29222.61, stdev=10209.00 00:44:08.701 clat (msec): min=114, max=247, avg=192.31, stdev=24.85 00:44:08.701 lat (msec): min=114, max=247, avg=192.34, stdev=24.85 00:44:08.701 clat percentiles (msec): 00:44:08.701 | 1.00th=[ 140], 5.00th=[ 150], 10.00th=[ 159], 20.00th=[ 169], 00:44:08.701 | 30.00th=[ 180], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 201], 00:44:08.701 | 70.00th=[ 207], 80.00th=[ 218], 90.00th=[ 222], 95.00th=[ 226], 00:44:08.701 | 99.00th=[ 247], 99.50th=[ 247], 99.90th=[ 247], 99.95th=[ 247], 00:44:08.701 | 99.99th=[ 247] 00:44:08.701 bw ( KiB/s): min= 256, max= 512, per=3.94%, avg=326.40, stdev=77.42, samples=20 00:44:08.701 iops : min= 64, max= 128, avg=81.60, stdev=19.35, samples=20 00:44:08.701 lat (msec) : 250=100.00% 00:44:08.701 cpu : usr=98.13%, sys=1.39%, ctx=52, majf=0, minf=1633 00:44:08.701 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:08.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.701 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.701 issued rwts: total=832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.701 filename1: (groupid=0, jobs=1): err= 0: pid=3241740: Tue Nov 19 21:33:41 2024 00:44:08.701 read: IOPS=79, BW=318KiB/s (325kB/s)(3200KiB/10077msec) 00:44:08.701 slat (nsec): min=11119, max=89989, avg=51522.83, stdev=15461.29 00:44:08.701 clat (msec): min=141, max=324, avg=201.07, stdev=27.97 00:44:08.701 lat (msec): min=141, max=324, avg=201.12, stdev=27.97 00:44:08.701 clat percentiles (msec): 
00:44:08.701 | 1.00th=[ 142], 5.00th=[ 159], 10.00th=[ 159], 20.00th=[ 174], 00:44:08.701 | 30.00th=[ 190], 40.00th=[ 197], 50.00th=[ 201], 60.00th=[ 207], 00:44:08.701 | 70.00th=[ 213], 80.00th=[ 220], 90.00th=[ 245], 95.00th=[ 251], 00:44:08.701 | 99.00th=[ 257], 99.50th=[ 257], 99.90th=[ 326], 99.95th=[ 326], 00:44:08.701 | 99.99th=[ 326] 00:44:08.701 bw ( KiB/s): min= 256, max= 384, per=3.79%, avg=313.60, stdev=65.33, samples=20 00:44:08.701 iops : min= 64, max= 96, avg=78.40, stdev=16.33, samples=20 00:44:08.701 lat (msec) : 250=94.00%, 500=6.00% 00:44:08.701 cpu : usr=97.25%, sys=1.81%, ctx=169, majf=0, minf=1633 00:44:08.701 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:44:08.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.701 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.701 issued rwts: total=800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.701 filename1: (groupid=0, jobs=1): err= 0: pid=3241741: Tue Nov 19 21:33:41 2024 00:44:08.701 read: IOPS=83, BW=336KiB/s (344kB/s)(3392KiB/10096msec) 00:44:08.701 slat (usec): min=11, max=109, avg=50.88, stdev=20.68 00:44:08.701 clat (msec): min=114, max=296, avg=189.47, stdev=32.68 00:44:08.701 lat (msec): min=114, max=296, avg=189.52, stdev=32.69 00:44:08.701 clat percentiles (msec): 00:44:08.701 | 1.00th=[ 116], 5.00th=[ 140], 10.00th=[ 146], 20.00th=[ 161], 00:44:08.701 | 30.00th=[ 169], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 203], 00:44:08.701 | 70.00th=[ 205], 80.00th=[ 220], 90.00th=[ 226], 95.00th=[ 241], 00:44:08.701 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 296], 99.95th=[ 296], 00:44:08.701 | 99.99th=[ 296] 00:44:08.701 bw ( KiB/s): min= 256, max= 432, per=4.05%, avg=335.20, stdev=62.83, samples=20 00:44:08.701 iops : min= 64, max= 108, avg=83.80, stdev=15.71, samples=20 00:44:08.701 lat (msec) : 250=97.17%, 500=2.83% 00:44:08.701 cpu : usr=97.13%, sys=1.67%, ctx=95, majf=0, minf=1634 00:44:08.701 IO depths : 1=3.1%, 2=9.0%, 4=23.9%, 8=54.6%, 16=9.4%, 32=0.0%, >=64=0.0% 00:44:08.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.701 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.701 issued rwts: total=848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.701 filename1: (groupid=0, jobs=1): err= 0: pid=3241742: Tue Nov 19 21:33:41 2024 00:44:08.701 read: IOPS=77, BW=312KiB/s (319kB/s)(3136KiB/10058msec) 00:44:08.701 slat (nsec): min=8334, max=73249, avg=37799.50, stdev=8089.49 00:44:08.701 clat (msec): min=108, max=343, avg=204.93, stdev=31.49 00:44:08.701 lat (msec): min=108, max=343, avg=204.97, stdev=31.49 00:44:08.701 clat percentiles (msec): 00:44:08.701 | 1.00th=[ 144], 5.00th=[ 161], 10.00th=[ 169], 20.00th=[ 186], 00:44:08.701 | 30.00th=[ 192], 40.00th=[ 197], 50.00th=[ 203], 60.00th=[ 205], 00:44:08.701 | 70.00th=[ 218], 80.00th=[ 224], 90.00th=[ 247], 95.00th=[ 257], 00:44:08.701 | 99.00th=[ 309], 99.50th=[ 330], 99.90th=[ 342], 99.95th=[ 342], 00:44:08.701 | 99.99th=[ 342] 00:44:08.701 bw ( KiB/s): min= 256, max= 384, per=3.71%, avg=307.20, stdev=62.85, samples=20 00:44:08.701 iops : min= 64, max= 96, avg=76.80, stdev=15.71, samples=20 00:44:08.701 lat (msec) : 250=94.13%, 500=5.87% 00:44:08.701 cpu : usr=97.15%, sys=1.96%, ctx=49, majf=0, minf=1633 00:44:08.701 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 
16=9.2%, 32=0.0%, >=64=0.0% 00:44:08.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.701 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.701 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.701 filename1: (groupid=0, jobs=1): err= 0: pid=3241743: Tue Nov 19 21:33:41 2024 00:44:08.701 read: IOPS=80, BW=323KiB/s (331kB/s)(3264KiB/10111msec) 00:44:08.701 slat (nsec): min=5810, max=99398, avg=59933.49, stdev=15392.43 00:44:08.701 clat (msec): min=115, max=327, avg=197.74, stdev=31.59 00:44:08.701 lat (msec): min=115, max=327, avg=197.80, stdev=31.60 00:44:08.701 clat percentiles (msec): 00:44:08.701 | 1.00th=[ 116], 5.00th=[ 142], 10.00th=[ 159], 20.00th=[ 169], 00:44:08.701 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 199], 60.00th=[ 207], 00:44:08.701 | 70.00th=[ 213], 80.00th=[ 220], 90.00th=[ 236], 95.00th=[ 251], 00:44:08.701 | 99.00th=[ 257], 99.50th=[ 257], 99.90th=[ 330], 99.95th=[ 330], 00:44:08.701 | 99.99th=[ 330] 00:44:08.701 bw ( KiB/s): min= 256, max= 384, per=3.87%, avg=320.00, stdev=65.66, samples=20 00:44:08.701 iops : min= 64, max= 96, avg=80.00, stdev=16.42, samples=20 00:44:08.701 lat (msec) : 250=95.34%, 500=4.66% 00:44:08.701 cpu : usr=97.94%, sys=1.44%, ctx=21, majf=0, minf=1636 00:44:08.701 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:44:08.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.701 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.701 issued rwts: total=816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.701 filename2: (groupid=0, jobs=1): err= 0: pid=3241744: Tue Nov 19 21:33:41 2024 00:44:08.701 read: IOPS=89, BW=360KiB/s (368kB/s)(3632KiB/10093msec) 00:44:08.701 slat (usec): min=9, max=117, avg=43.51, stdev=24.31 00:44:08.701 clat (msec): min=106, max=275, avg=176.51, stdev=26.52 00:44:08.701 lat (msec): min=106, max=275, avg=176.56, stdev=26.54 00:44:08.701 clat percentiles (msec): 00:44:08.701 | 1.00th=[ 116], 5.00th=[ 134], 10.00th=[ 144], 20.00th=[ 157], 00:44:08.701 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 176], 60.00th=[ 190], 00:44:08.701 | 70.00th=[ 194], 80.00th=[ 201], 90.00th=[ 211], 95.00th=[ 213], 00:44:08.701 | 99.00th=[ 222], 99.50th=[ 264], 99.90th=[ 275], 99.95th=[ 275], 00:44:08.701 | 99.99th=[ 275] 00:44:08.701 bw ( KiB/s): min= 256, max= 464, per=4.35%, avg=360.80, stdev=62.83, samples=20 00:44:08.701 iops : min= 64, max= 116, avg=90.20, stdev=15.71, samples=20 00:44:08.701 lat (msec) : 250=99.34%, 500=0.66% 00:44:08.701 cpu : usr=98.04%, sys=1.43%, ctx=21, majf=0, minf=1632 00:44:08.701 IO depths : 1=2.6%, 2=6.9%, 4=19.1%, 8=61.5%, 16=9.9%, 32=0.0%, >=64=0.0% 00:44:08.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.701 complete : 0=0.0%, 4=92.4%, 8=2.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.701 issued rwts: total=908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.702 filename2: (groupid=0, jobs=1): err= 0: pid=3241745: Tue Nov 19 21:33:41 2024 00:44:08.702 read: IOPS=79, BW=317KiB/s (324kB/s)(3192KiB/10074msec) 00:44:08.702 slat (nsec): min=10956, max=99095, avg=56948.93, stdev=10545.04 00:44:08.702 clat (msec): min=77, max=351, avg=201.45, stdev=36.82 00:44:08.702 lat (msec): min=77, max=351, 
avg=201.51, stdev=36.82 00:44:08.702 clat percentiles (msec): 00:44:08.702 | 1.00th=[ 78], 5.00th=[ 142], 10.00th=[ 159], 20.00th=[ 174], 00:44:08.702 | 30.00th=[ 192], 40.00th=[ 197], 50.00th=[ 203], 60.00th=[ 207], 00:44:08.702 | 70.00th=[ 213], 80.00th=[ 222], 90.00th=[ 249], 95.00th=[ 255], 00:44:08.702 | 99.00th=[ 313], 99.50th=[ 347], 99.90th=[ 351], 99.95th=[ 351], 00:44:08.702 | 99.99th=[ 351] 00:44:08.702 bw ( KiB/s): min= 256, max= 384, per=3.77%, avg=312.80, stdev=63.04, samples=20 00:44:08.702 iops : min= 64, max= 96, avg=78.20, stdev=15.76, samples=20 00:44:08.702 lat (msec) : 100=1.75%, 250=90.23%, 500=8.02% 00:44:08.702 cpu : usr=97.59%, sys=1.62%, ctx=40, majf=0, minf=1631 00:44:08.702 IO depths : 1=3.6%, 2=9.9%, 4=25.1%, 8=52.6%, 16=8.8%, 32=0.0%, >=64=0.0% 00:44:08.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.702 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.702 issued rwts: total=798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.702 filename2: (groupid=0, jobs=1): err= 0: pid=3241746: Tue Nov 19 21:33:41 2024 00:44:08.702 read: IOPS=77, BW=312KiB/s (319kB/s)(3136KiB/10055msec) 00:44:08.702 slat (nsec): min=11156, max=77432, avg=36792.41, stdev=9386.27 00:44:08.702 clat (msec): min=108, max=340, avg=204.90, stdev=34.27 00:44:08.702 lat (msec): min=108, max=340, avg=204.93, stdev=34.27 00:44:08.702 clat percentiles (msec): 00:44:08.702 | 1.00th=[ 128], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 174], 00:44:08.702 | 30.00th=[ 192], 40.00th=[ 197], 50.00th=[ 203], 60.00th=[ 207], 00:44:08.702 | 70.00th=[ 220], 80.00th=[ 226], 90.00th=[ 249], 95.00th=[ 257], 00:44:08.702 | 99.00th=[ 309], 99.50th=[ 330], 99.90th=[ 342], 99.95th=[ 342], 00:44:08.702 | 99.99th=[ 342] 00:44:08.702 bw ( KiB/s): min= 256, max= 384, per=3.71%, avg=307.20, stdev=61.33, samples=20 00:44:08.702 iops : min= 64, max= 96, avg=76.80, stdev=15.33, samples=20 00:44:08.702 lat (msec) : 250=92.22%, 500=7.78% 00:44:08.702 cpu : usr=98.20%, sys=1.29%, ctx=30, majf=0, minf=1633 00:44:08.702 IO depths : 1=3.2%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.3%, 32=0.0%, >=64=0.0% 00:44:08.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.702 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.702 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.702 filename2: (groupid=0, jobs=1): err= 0: pid=3241747: Tue Nov 19 21:33:41 2024 00:44:08.702 read: IOPS=95, BW=381KiB/s (390kB/s)(3840KiB/10089msec) 00:44:08.702 slat (usec): min=5, max=122, avg=25.84, stdev=17.46 00:44:08.702 clat (msec): min=98, max=252, avg=167.71, stdev=31.48 00:44:08.702 lat (msec): min=98, max=252, avg=167.74, stdev=31.48 00:44:08.702 clat percentiles (msec): 00:44:08.702 | 1.00th=[ 105], 5.00th=[ 117], 10.00th=[ 126], 20.00th=[ 140], 00:44:08.702 | 30.00th=[ 146], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 176], 00:44:08.702 | 70.00th=[ 192], 80.00th=[ 201], 90.00th=[ 205], 95.00th=[ 213], 00:44:08.702 | 99.00th=[ 249], 99.50th=[ 251], 99.90th=[ 253], 99.95th=[ 253], 00:44:08.702 | 99.99th=[ 253] 00:44:08.702 bw ( KiB/s): min= 256, max= 480, per=4.56%, avg=377.60, stdev=48.53, samples=20 00:44:08.702 iops : min= 64, max= 120, avg=94.40, stdev=12.13, samples=20 00:44:08.702 lat (msec) : 100=0.21%, 250=98.96%, 500=0.83% 00:44:08.702 cpu : usr=97.08%, sys=1.82%, ctx=244, 
majf=0, minf=1631 00:44:08.702 IO depths : 1=2.7%, 2=6.7%, 4=17.8%, 8=62.8%, 16=10.0%, 32=0.0%, >=64=0.0% 00:44:08.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.702 complete : 0=0.0%, 4=92.0%, 8=2.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.702 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.702 filename2: (groupid=0, jobs=1): err= 0: pid=3241748: Tue Nov 19 21:33:41 2024 00:44:08.702 read: IOPS=95, BW=380KiB/s (389kB/s)(3848KiB/10119msec) 00:44:08.702 slat (usec): min=6, max=105, avg=37.12, stdev=22.28 00:44:08.702 clat (msec): min=36, max=259, avg=166.93, stdev=34.14 00:44:08.702 lat (msec): min=36, max=259, avg=166.97, stdev=34.15 00:44:08.702 clat percentiles (msec): 00:44:08.702 | 1.00th=[ 37], 5.00th=[ 103], 10.00th=[ 131], 20.00th=[ 144], 00:44:08.702 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 169], 60.00th=[ 176], 00:44:08.702 | 70.00th=[ 190], 80.00th=[ 197], 90.00th=[ 203], 95.00th=[ 207], 00:44:08.702 | 99.00th=[ 241], 99.50th=[ 257], 99.90th=[ 259], 99.95th=[ 259], 00:44:08.702 | 99.99th=[ 259] 00:44:08.702 bw ( KiB/s): min= 256, max= 512, per=4.57%, avg=378.40, stdev=87.83, samples=20 00:44:08.702 iops : min= 64, max= 128, avg=94.60, stdev=21.96, samples=20 00:44:08.702 lat (msec) : 50=1.66%, 100=3.22%, 250=94.49%, 500=0.62% 00:44:08.702 cpu : usr=97.78%, sys=1.54%, ctx=36, majf=0, minf=1634 00:44:08.702 IO depths : 1=2.0%, 2=5.7%, 4=17.4%, 8=64.3%, 16=10.6%, 32=0.0%, >=64=0.0% 00:44:08.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.702 complete : 0=0.0%, 4=91.9%, 8=2.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.702 issued rwts: total=962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.702 filename2: (groupid=0, jobs=1): err= 0: pid=3241749: Tue Nov 19 21:33:41 2024 00:44:08.702 read: IOPS=77, BW=312KiB/s (319kB/s)(3136KiB/10063msec) 00:44:08.702 slat (nsec): min=13545, max=78953, avg=24934.07, stdev=7892.63 00:44:08.702 clat (msec): min=148, max=341, avg=205.11, stdev=33.40 00:44:08.702 lat (msec): min=148, max=341, avg=205.13, stdev=33.40 00:44:08.702 clat percentiles (msec): 00:44:08.702 | 1.00th=[ 148], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 184], 00:44:08.702 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 203], 60.00th=[ 211], 00:44:08.702 | 70.00th=[ 218], 80.00th=[ 222], 90.00th=[ 247], 95.00th=[ 253], 00:44:08.702 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 342], 99.95th=[ 342], 00:44:08.702 | 99.99th=[ 342] 00:44:08.702 bw ( KiB/s): min= 256, max= 384, per=3.71%, avg=307.20, stdev=62.85, samples=20 00:44:08.702 iops : min= 64, max= 96, avg=76.80, stdev=15.71, samples=20 00:44:08.702 lat (msec) : 250=91.58%, 500=8.42% 00:44:08.702 cpu : usr=97.63%, sys=1.62%, ctx=81, majf=0, minf=1633 00:44:08.702 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:44:08.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.702 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.702 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.702 filename2: (groupid=0, jobs=1): err= 0: pid=3241750: Tue Nov 19 21:33:41 2024 00:44:08.702 read: IOPS=79, BW=318KiB/s (325kB/s)(3200KiB/10076msec) 00:44:08.702 slat (usec): min=12, max=108, avg=47.77, stdev=17.15 00:44:08.702 clat 
(msec): min=113, max=333, avg=201.13, stdev=35.57 00:44:08.702 lat (msec): min=113, max=333, avg=201.17, stdev=35.56 00:44:08.702 clat percentiles (msec): 00:44:08.702 | 1.00th=[ 118], 5.00th=[ 142], 10.00th=[ 155], 20.00th=[ 169], 00:44:08.702 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 201], 60.00th=[ 209], 00:44:08.702 | 70.00th=[ 218], 80.00th=[ 224], 90.00th=[ 249], 95.00th=[ 257], 00:44:08.702 | 99.00th=[ 288], 99.50th=[ 305], 99.90th=[ 334], 99.95th=[ 334], 00:44:08.702 | 99.99th=[ 334] 00:44:08.702 bw ( KiB/s): min= 256, max= 496, per=3.79%, avg=313.60, stdev=74.94, samples=20 00:44:08.702 iops : min= 64, max= 124, avg=78.40, stdev=18.73, samples=20 00:44:08.702 lat (msec) : 250=90.00%, 500=10.00% 00:44:08.702 cpu : usr=98.49%, sys=0.99%, ctx=17, majf=0, minf=1633 00:44:08.702 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:44:08.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.702 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.702 issued rwts: total=800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.702 filename2: (groupid=0, jobs=1): err= 0: pid=3241751: Tue Nov 19 21:33:41 2024 00:44:08.702 read: IOPS=116, BW=464KiB/s (476kB/s)(4696KiB/10110msec) 00:44:08.702 slat (usec): min=4, max=115, avg=18.54, stdev=12.64 00:44:08.702 clat (msec): min=21, max=248, avg=136.79, stdev=33.65 00:44:08.702 lat (msec): min=21, max=248, avg=136.81, stdev=33.65 00:44:08.702 clat percentiles (msec): 00:44:08.702 | 1.00th=[ 23], 5.00th=[ 72], 10.00th=[ 109], 20.00th=[ 122], 00:44:08.702 | 30.00th=[ 128], 40.00th=[ 133], 50.00th=[ 134], 60.00th=[ 140], 00:44:08.702 | 70.00th=[ 144], 80.00th=[ 159], 90.00th=[ 176], 95.00th=[ 192], 00:44:08.702 | 99.00th=[ 232], 99.50th=[ 236], 99.90th=[ 249], 99.95th=[ 249], 00:44:08.702 | 99.99th=[ 249] 00:44:08.702 bw ( KiB/s): min= 352, max= 640, per=5.65%, avg=467.20, stdev=72.97, samples=20 00:44:08.702 iops : min= 88, max= 160, avg=116.80, stdev=18.24, samples=20 00:44:08.702 lat (msec) : 50=2.73%, 100=6.47%, 250=90.80% 00:44:08.702 cpu : usr=98.18%, sys=1.37%, ctx=13, majf=0, minf=1632 00:44:08.702 IO depths : 1=0.4%, 2=1.3%, 4=8.3%, 8=77.6%, 16=12.4%, 32=0.0%, >=64=0.0% 00:44:08.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.702 complete : 0=0.0%, 4=89.3%, 8=5.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.702 issued rwts: total=1174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:08.702 00:44:08.702 Run status group 0 (all jobs): 00:44:08.702 READ: bw=8266KiB/s (8465kB/s), 312KiB/s-464KiB/s (319kB/s-476kB/s), io=81.7MiB (85.7MB), run=10014-10119msec 00:44:09.268 ----------------------------------------------------- 00:44:09.268 Suppressions used: 00:44:09.268 count bytes template 00:44:09.268 45 402 /usr/src/fio/parse.c 00:44:09.268 1 8 libtcmalloc_minimal.so 00:44:09.268 1 904 libcrypto.so 00:44:09.268 ----------------------------------------------------- 00:44:09.268 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@36 -- # local sub_id=0 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:44:09.268 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@115 -- # runtime=5 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.269 bdev_null0 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.269 [2024-11-19 21:33:42.917835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.269 bdev_null1 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:09.269 { 00:44:09.269 "params": { 00:44:09.269 "name": "Nvme$subsystem", 00:44:09.269 "trtype": "$TEST_TRANSPORT", 00:44:09.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:09.269 "adrfam": "ipv4", 00:44:09.269 "trsvcid": "$NVMF_PORT", 00:44:09.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:09.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:09.269 "hdgst": ${hdgst:-false}, 00:44:09.269 "ddgst": ${ddgst:-false} 00:44:09.269 }, 00:44:09.269 "method": "bdev_nvme_attach_controller" 00:44:09.269 } 00:44:09.269 EOF 00:44:09.269 )") 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:09.269 { 00:44:09.269 "params": { 00:44:09.269 "name": "Nvme$subsystem", 00:44:09.269 "trtype": "$TEST_TRANSPORT", 00:44:09.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:09.269 "adrfam": "ipv4", 00:44:09.269 "trsvcid": "$NVMF_PORT", 00:44:09.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:09.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:09.269 "hdgst": ${hdgst:-false}, 00:44:09.269 "ddgst": ${ddgst:-false} 00:44:09.269 }, 00:44:09.269 "method": "bdev_nvme_attach_controller" 00:44:09.269 } 00:44:09.269 EOF 00:44:09.269 )") 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
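
[editor's note] The create_subsystems 0 1 sequence traced a little earlier builds, per index, a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, exposes it through its own subsystem, and adds a TCP listener on 10.0.0.2:4420. Assuming rpc_cmd is the usual thin wrapper around scripts/rpc.py, the same setup looks roughly like:

# Sketch of the per-index subsystem setup traced above; assumes rpc_cmd simply
# forwards to scripts/rpc.py against the running nvmf target application.
for i in 0 1; do
    ./scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
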
00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:09.269 "params": { 00:44:09.269 "name": "Nvme0", 00:44:09.269 "trtype": "tcp", 00:44:09.269 "traddr": "10.0.0.2", 00:44:09.269 "adrfam": "ipv4", 00:44:09.269 "trsvcid": "4420", 00:44:09.269 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:09.269 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:09.269 "hdgst": false, 00:44:09.269 "ddgst": false 00:44:09.269 }, 00:44:09.269 "method": "bdev_nvme_attach_controller" 00:44:09.269 },{ 00:44:09.269 "params": { 00:44:09.269 "name": "Nvme1", 00:44:09.269 "trtype": "tcp", 00:44:09.269 "traddr": "10.0.0.2", 00:44:09.269 "adrfam": "ipv4", 00:44:09.269 "trsvcid": "4420", 00:44:09.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:09.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:09.269 "hdgst": false, 00:44:09.269 "ddgst": false 00:44:09.269 }, 00:44:09.269 "method": "bdev_nvme_attach_controller" 00:44:09.269 }' 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:09.269 21:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:09.528 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:09.528 ... 00:44:09.528 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:09.528 ... 
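
[editor's note] For this pass the harness switches to mixed block sizes (8k reads, 16k writes, 128k trims per the banner above), numjobs=2, iodepth=8 and a 5-second runtime against the two attached bdevs, giving the 4 fio threads started below. The generated job file is not echoed into the log; judging from the banner it is roughly equivalent to the following, where the section names, NvmeXn1 bdev names and time_based setting are assumptions, not copied from the log:

# Rough reconstruction of the job file gen_fio_conf appears to produce here.
cat > dif_rand.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
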
00:44:09.528 fio-3.35 00:44:09.528 Starting 4 threads 00:44:16.085 00:44:16.085 filename0: (groupid=0, jobs=1): err= 0: pid=3243312: Tue Nov 19 21:33:49 2024 00:44:16.085 read: IOPS=1428, BW=11.2MiB/s (11.7MB/s)(55.9MiB/5004msec) 00:44:16.085 slat (nsec): min=6715, max=81725, avg=25598.38, stdev=7713.42 00:44:16.085 clat (usec): min=987, max=13225, avg=5504.35, stdev=449.82 00:44:16.085 lat (usec): min=1012, max=13273, avg=5529.95, stdev=449.93 00:44:16.085 clat percentiles (usec): 00:44:16.086 | 1.00th=[ 4686], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5276], 00:44:16.086 | 30.00th=[ 5342], 40.00th=[ 5407], 50.00th=[ 5473], 60.00th=[ 5538], 00:44:16.086 | 70.00th=[ 5669], 80.00th=[ 5735], 90.00th=[ 5866], 95.00th=[ 5997], 00:44:16.086 | 99.00th=[ 6259], 99.50th=[ 6849], 99.90th=[12911], 99.95th=[12911], 00:44:16.086 | 99.99th=[13173] 00:44:16.086 bw ( KiB/s): min=10752, max=11904, per=25.00%, avg=11427.20, stdev=368.56, samples=10 00:44:16.086 iops : min= 1344, max= 1488, avg=1428.40, stdev=46.07, samples=10 00:44:16.086 lat (usec) : 1000=0.01% 00:44:16.086 lat (msec) : 2=0.06%, 4=0.25%, 10=99.57%, 20=0.11% 00:44:16.086 cpu : usr=94.64%, sys=4.18%, ctx=118, majf=0, minf=1634 00:44:16.086 IO depths : 1=1.0%, 2=18.7%, 4=55.0%, 8=25.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:16.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.086 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.086 issued rwts: total=7150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.086 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:16.086 filename0: (groupid=0, jobs=1): err= 0: pid=3243314: Tue Nov 19 21:33:49 2024 00:44:16.086 read: IOPS=1427, BW=11.2MiB/s (11.7MB/s)(55.8MiB/5001msec) 00:44:16.086 slat (nsec): min=6512, max=69645, avg=25970.48, stdev=10346.43 00:44:16.086 clat (usec): min=1043, max=11134, avg=5503.05, stdev=637.46 00:44:16.086 lat (usec): min=1061, max=11155, avg=5529.02, stdev=637.58 00:44:16.086 clat percentiles (usec): 00:44:16.086 | 1.00th=[ 2868], 5.00th=[ 5080], 10.00th=[ 5145], 20.00th=[ 5276], 00:44:16.086 | 30.00th=[ 5342], 40.00th=[ 5407], 50.00th=[ 5473], 60.00th=[ 5538], 00:44:16.086 | 70.00th=[ 5669], 80.00th=[ 5735], 90.00th=[ 5866], 95.00th=[ 5997], 00:44:16.086 | 99.00th=[ 8717], 99.50th=[ 9372], 99.90th=[10814], 99.95th=[10814], 00:44:16.086 | 99.99th=[11076] 00:44:16.086 bw ( KiB/s): min=10752, max=11920, per=24.98%, avg=11415.78, stdev=413.72, samples=9 00:44:16.086 iops : min= 1344, max= 1490, avg=1426.89, stdev=51.80, samples=9 00:44:16.086 lat (msec) : 2=0.34%, 4=0.88%, 10=98.66%, 20=0.13% 00:44:16.086 cpu : usr=95.24%, sys=4.16%, ctx=10, majf=0, minf=1632 00:44:16.086 IO depths : 1=1.0%, 2=20.0%, 4=54.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:16.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.086 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.086 issued rwts: total=7140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.086 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:16.086 filename1: (groupid=0, jobs=1): err= 0: pid=3243315: Tue Nov 19 21:33:49 2024 00:44:16.086 read: IOPS=1427, BW=11.1MiB/s (11.7MB/s)(55.8MiB/5002msec) 00:44:16.086 slat (nsec): min=6346, max=69769, avg=25629.45, stdev=10510.81 00:44:16.086 clat (usec): min=929, max=13089, avg=5508.28, stdev=729.24 00:44:16.086 lat (usec): min=946, max=13108, avg=5533.91, stdev=729.63 00:44:16.086 clat percentiles (usec): 00:44:16.086 | 1.00th=[ 2114], 5.00th=[ 
5080], 10.00th=[ 5145], 20.00th=[ 5276], 00:44:16.086 | 30.00th=[ 5342], 40.00th=[ 5407], 50.00th=[ 5473], 60.00th=[ 5538], 00:44:16.086 | 70.00th=[ 5669], 80.00th=[ 5735], 90.00th=[ 5866], 95.00th=[ 5997], 00:44:16.086 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[10290], 99.95th=[10421], 00:44:16.086 | 99.99th=[13042] 00:44:16.086 bw ( KiB/s): min=10768, max=11824, per=24.93%, avg=11395.56, stdev=386.64, samples=9 00:44:16.086 iops : min= 1346, max= 1478, avg=1424.44, stdev=48.33, samples=9 00:44:16.086 lat (usec) : 1000=0.01% 00:44:16.086 lat (msec) : 2=0.90%, 4=0.56%, 10=98.28%, 20=0.25% 00:44:16.086 cpu : usr=95.02%, sys=4.40%, ctx=7, majf=0, minf=1634 00:44:16.086 IO depths : 1=0.9%, 2=17.2%, 4=57.0%, 8=24.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:16.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.086 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.086 issued rwts: total=7138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.086 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:16.086 filename1: (groupid=0, jobs=1): err= 0: pid=3243316: Tue Nov 19 21:33:49 2024 00:44:16.086 read: IOPS=1431, BW=11.2MiB/s (11.7MB/s)(55.9MiB/5002msec) 00:44:16.086 slat (nsec): min=6374, max=70885, avg=24594.95, stdev=9770.53 00:44:16.086 clat (usec): min=1204, max=12194, avg=5496.77, stdev=461.26 00:44:16.086 lat (usec): min=1222, max=12232, avg=5521.37, stdev=461.40 00:44:16.086 clat percentiles (usec): 00:44:16.086 | 1.00th=[ 4424], 5.00th=[ 5080], 10.00th=[ 5145], 20.00th=[ 5276], 00:44:16.086 | 30.00th=[ 5342], 40.00th=[ 5407], 50.00th=[ 5473], 60.00th=[ 5538], 00:44:16.086 | 70.00th=[ 5604], 80.00th=[ 5735], 90.00th=[ 5866], 95.00th=[ 5997], 00:44:16.086 | 99.00th=[ 6259], 99.50th=[ 7832], 99.90th=[11863], 99.95th=[11863], 00:44:16.086 | 99.99th=[12256] 00:44:16.086 bw ( KiB/s): min=10800, max=11904, per=25.05%, avg=11447.11, stdev=399.29, samples=9 00:44:16.086 iops : min= 1350, max= 1488, avg=1430.89, stdev=49.91, samples=9 00:44:16.086 lat (msec) : 2=0.11%, 4=0.43%, 10=99.33%, 20=0.13% 00:44:16.086 cpu : usr=94.98%, sys=4.48%, ctx=7, majf=0, minf=1634 00:44:16.086 IO depths : 1=1.3%, 2=19.4%, 4=54.5%, 8=24.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:16.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.086 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:16.086 issued rwts: total=7159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:16.086 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:16.086 00:44:16.086 Run status group 0 (all jobs): 00:44:16.086 READ: bw=44.6MiB/s (46.8MB/s), 11.1MiB/s-11.2MiB/s (11.7MB/s-11.7MB/s), io=223MiB (234MB), run=5001-5004msec 00:44:17.021 ----------------------------------------------------- 00:44:17.021 Suppressions used: 00:44:17.021 count bytes template 00:44:17.021 6 52 /usr/src/fio/parse.c 00:44:17.021 1 8 libtcmalloc_minimal.so 00:44:17.021 1 904 libcrypto.so 00:44:17.021 ----------------------------------------------------- 00:44:17.021 00:44:17.021 21:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:44:17.021 21:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:17.021 21:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:17.021 21:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:17.021 21:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:17.021 21:33:50 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:17.021 21:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:17.021 21:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:17.021 21:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:17.021 21:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:17.021 21:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:17.022 21:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:17.022 21:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:17.022 21:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:17.022 21:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:17.022 21:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:17.022 21:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:17.022 21:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:17.022 21:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:17.022 21:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:17.022 21:33:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:17.022 21:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:17.022 21:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:17.022 21:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:17.022 00:44:17.022 real 0m28.494s 00:44:17.022 user 4m38.259s 00:44:17.022 sys 0m6.935s 00:44:17.022 21:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:17.022 21:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:17.022 ************************************ 00:44:17.022 END TEST fio_dif_rand_params 00:44:17.022 ************************************ 00:44:17.022 21:33:50 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:44:17.022 21:33:50 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:17.022 21:33:50 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:17.022 21:33:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:17.022 ************************************ 00:44:17.022 START TEST fio_dif_digest 00:44:17.022 ************************************ 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:44:17.022 21:33:50 
nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:17.022 bdev_null0 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:17.022 [2024-11-19 21:33:50.723708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:17.022 { 00:44:17.022 "params": { 00:44:17.022 "name": "Nvme$subsystem", 00:44:17.022 "trtype": 
"$TEST_TRANSPORT", 00:44:17.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:17.022 "adrfam": "ipv4", 00:44:17.022 "trsvcid": "$NVMF_PORT", 00:44:17.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:17.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:17.022 "hdgst": ${hdgst:-false}, 00:44:17.022 "ddgst": ${ddgst:-false} 00:44:17.022 }, 00:44:17.022 "method": "bdev_nvme_attach_controller" 00:44:17.022 } 00:44:17.022 EOF 00:44:17.022 )") 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:17.022 "params": { 00:44:17.022 "name": "Nvme0", 00:44:17.022 "trtype": "tcp", 00:44:17.022 "traddr": "10.0.0.2", 00:44:17.022 "adrfam": "ipv4", 00:44:17.022 "trsvcid": "4420", 00:44:17.022 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:17.022 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:17.022 "hdgst": true, 00:44:17.022 "ddgst": true 00:44:17.022 }, 00:44:17.022 "method": "bdev_nvme_attach_controller" 00:44:17.022 }' 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:17.022 21:33:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:17.281 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:44:17.281 ... 00:44:17.281 fio-3.35 00:44:17.281 Starting 3 threads 00:44:29.484 00:44:29.484 filename0: (groupid=0, jobs=1): err= 0: pid=3244253: Tue Nov 19 21:34:02 2024 00:44:29.484 read: IOPS=169, BW=21.2MiB/s (22.3MB/s)(213MiB/10047msec) 00:44:29.484 slat (nsec): min=12014, max=73550, avg=22337.75, stdev=5760.24 00:44:29.484 clat (usec): min=11988, max=61409, avg=17615.40, stdev=2384.39 00:44:29.484 lat (usec): min=12034, max=61428, avg=17637.73, stdev=2384.23 00:44:29.484 clat percentiles (usec): 00:44:29.484 | 1.00th=[14746], 5.00th=[15795], 10.00th=[16188], 20.00th=[16712], 00:44:29.484 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:44:29.484 | 70.00th=[17957], 80.00th=[18220], 90.00th=[18744], 95.00th=[19268], 00:44:29.484 | 99.00th=[20579], 99.50th=[21365], 99.90th=[59507], 99.95th=[61604], 00:44:29.484 | 99.99th=[61604] 00:44:29.484 bw ( KiB/s): min=19968, max=22528, per=33.30%, avg=21811.20, stdev=579.02, samples=20 00:44:29.484 iops : min= 156, max= 176, avg=170.40, stdev= 4.52, samples=20 00:44:29.484 lat (msec) : 20=97.71%, 50=2.05%, 100=0.23% 00:44:29.484 cpu : usr=95.04%, sys=4.39%, ctx=15, majf=0, minf=1636 00:44:29.484 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:29.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:29.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:29.484 issued rwts: total=1706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:29.484 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:29.484 filename0: (groupid=0, jobs=1): err= 0: pid=3244254: Tue Nov 19 21:34:02 2024 00:44:29.484 read: IOPS=164, BW=20.5MiB/s (21.5MB/s)(206MiB/10043msec) 00:44:29.484 slat (nsec): min=11810, max=48666, avg=21497.39, stdev=4336.17 00:44:29.484 clat (usec): min=11569, max=61583, avg=18209.59, stdev=2451.92 00:44:29.484 lat (usec): min=11594, max=61621, avg=18231.08, stdev=2451.89 00:44:29.484 clat percentiles (usec): 00:44:29.484 | 1.00th=[15139], 5.00th=[16450], 10.00th=[16909], 20.00th=[17433], 00:44:29.484 | 30.00th=[17695], 40.00th=[17957], 
50.00th=[17957], 60.00th=[18220], 00:44:29.484 | 70.00th=[18482], 80.00th=[19006], 90.00th=[19530], 95.00th=[20055], 00:44:29.484 | 99.00th=[21103], 99.50th=[21627], 99.90th=[61604], 99.95th=[61604], 00:44:29.484 | 99.99th=[61604] 00:44:29.484 bw ( KiB/s): min=20480, max=21760, per=32.21%, avg=21094.40, stdev=418.60, samples=20 00:44:29.484 iops : min= 160, max= 170, avg=164.80, stdev= 3.27, samples=20 00:44:29.484 lat (msec) : 20=95.76%, 50=4.00%, 100=0.24% 00:44:29.484 cpu : usr=95.14%, sys=4.28%, ctx=12, majf=0, minf=1635 00:44:29.484 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:29.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:29.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:29.484 issued rwts: total=1650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:29.484 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:29.484 filename0: (groupid=0, jobs=1): err= 0: pid=3244255: Tue Nov 19 21:34:02 2024 00:44:29.484 read: IOPS=177, BW=22.2MiB/s (23.3MB/s)(223MiB/10048msec) 00:44:29.484 slat (nsec): min=5974, max=71306, avg=29509.36, stdev=7002.45 00:44:29.484 clat (usec): min=9868, max=54196, avg=16827.76, stdev=1626.89 00:44:29.484 lat (usec): min=9900, max=54224, avg=16857.27, stdev=1626.73 00:44:29.484 clat percentiles (usec): 00:44:29.484 | 1.00th=[12649], 5.00th=[15008], 10.00th=[15533], 20.00th=[16057], 00:44:29.484 | 30.00th=[16319], 40.00th=[16712], 50.00th=[16909], 60.00th=[17171], 00:44:29.484 | 70.00th=[17433], 80.00th=[17695], 90.00th=[17957], 95.00th=[18482], 00:44:29.484 | 99.00th=[19006], 99.50th=[19792], 99.90th=[50594], 99.95th=[54264], 00:44:29.484 | 99.99th=[54264] 00:44:29.484 bw ( KiB/s): min=22272, max=23552, per=34.85%, avg=22822.40, stdev=373.99, samples=20 00:44:29.484 iops : min= 174, max= 184, avg=178.30, stdev= 2.92, samples=20 00:44:29.484 lat (msec) : 10=0.06%, 20=99.66%, 50=0.17%, 100=0.11% 00:44:29.484 cpu : usr=88.14%, sys=7.38%, ctx=501, majf=0, minf=1634 00:44:29.484 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:29.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:29.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:29.485 issued rwts: total=1785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:29.485 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:29.485 00:44:29.485 Run status group 0 (all jobs): 00:44:29.485 READ: bw=64.0MiB/s (67.1MB/s), 20.5MiB/s-22.2MiB/s (21.5MB/s-23.3MB/s), io=643MiB (674MB), run=10043-10048msec 00:44:29.485 ----------------------------------------------------- 00:44:29.485 Suppressions used: 00:44:29.485 count bytes template 00:44:29.485 5 44 /usr/src/fio/parse.c 00:44:29.485 1 8 libtcmalloc_minimal.so 00:44:29.485 1 904 libcrypto.so 00:44:29.485 ----------------------------------------------------- 00:44:29.485 00:44:29.485 21:34:03 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:44:29.485 21:34:03 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:44:29.485 21:34:03 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:44:29.485 21:34:03 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:29.485 21:34:03 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:44:29.485 21:34:03 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:29.485 21:34:03 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:44:29.485 21:34:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:29.485 21:34:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:29.485 21:34:03 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:29.485 21:34:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:29.485 21:34:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:29.485 21:34:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:29.485 00:44:29.485 real 0m12.401s 00:44:29.485 user 0m30.198s 00:44:29.485 sys 0m2.135s 00:44:29.485 21:34:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:29.485 21:34:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:29.485 ************************************ 00:44:29.485 END TEST fio_dif_digest 00:44:29.485 ************************************ 00:44:29.485 21:34:03 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:44:29.485 21:34:03 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:44:29.485 21:34:03 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:29.485 21:34:03 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:44:29.485 21:34:03 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:29.485 21:34:03 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:44:29.485 21:34:03 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:29.485 21:34:03 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:29.485 rmmod nvme_tcp 00:44:29.485 rmmod nvme_fabrics 00:44:29.485 rmmod nvme_keyring 00:44:29.485 21:34:03 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:29.485 21:34:03 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:44:29.485 21:34:03 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:44:29.485 21:34:03 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3236735 ']' 00:44:29.485 21:34:03 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3236735 00:44:29.485 21:34:03 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3236735 ']' 00:44:29.485 21:34:03 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3236735 00:44:29.485 21:34:03 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:44:29.485 21:34:03 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:29.485 21:34:03 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3236735 00:44:29.485 21:34:03 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:29.485 21:34:03 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:29.485 21:34:03 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3236735' 00:44:29.485 killing process with pid 3236735 00:44:29.485 21:34:03 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3236735 00:44:29.485 21:34:03 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3236735 00:44:30.860 21:34:04 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:44:30.860 21:34:04 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:31.796 Waiting for block devices as requested 00:44:31.796 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:44:32.082 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:32.082 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:32.082 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:32.082 
0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:32.374 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:32.374 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:32.374 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:32.374 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:32.658 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:32.658 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:32.659 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:32.659 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:32.659 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:32.918 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:32.918 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:32.918 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:33.177 21:34:06 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:33.177 21:34:06 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:33.177 21:34:06 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:44:33.177 21:34:06 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:33.177 21:34:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:44:33.177 21:34:06 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:44:33.177 21:34:06 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:33.177 21:34:06 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:33.177 21:34:06 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:33.177 21:34:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:33.177 21:34:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:35.079 21:34:08 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:35.079 00:44:35.079 real 1m16.450s 00:44:35.079 user 6m48.424s 00:44:35.079 sys 0m18.074s 00:44:35.079 21:34:08 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:35.079 21:34:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:35.079 ************************************ 00:44:35.079 END TEST nvmf_dif 00:44:35.079 ************************************ 00:44:35.079 21:34:08 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:35.079 21:34:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:35.079 21:34:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:35.079 21:34:08 -- common/autotest_common.sh@10 -- # set +x 00:44:35.079 ************************************ 00:44:35.079 START TEST nvmf_abort_qd_sizes 00:44:35.079 ************************************ 00:44:35.079 21:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:35.338 * Looking for test storage... 
00:44:35.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:35.338 21:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:44:35.339 21:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:35.339 21:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:44:35.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.339 --rc genhtml_branch_coverage=1 00:44:35.339 --rc genhtml_function_coverage=1 00:44:35.339 --rc genhtml_legend=1 00:44:35.339 --rc geninfo_all_blocks=1 00:44:35.339 --rc geninfo_unexecuted_blocks=1 00:44:35.339 00:44:35.339 ' 00:44:35.339 21:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:44:35.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.339 --rc genhtml_branch_coverage=1 00:44:35.339 --rc genhtml_function_coverage=1 00:44:35.339 --rc genhtml_legend=1 00:44:35.339 --rc geninfo_all_blocks=1 00:44:35.339 --rc geninfo_unexecuted_blocks=1 00:44:35.339 00:44:35.339 ' 00:44:35.339 21:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:44:35.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.339 --rc genhtml_branch_coverage=1 00:44:35.339 --rc genhtml_function_coverage=1 00:44:35.339 --rc genhtml_legend=1 00:44:35.339 --rc geninfo_all_blocks=1 00:44:35.339 --rc geninfo_unexecuted_blocks=1 00:44:35.339 00:44:35.339 ' 00:44:35.339 21:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:44:35.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.339 --rc genhtml_branch_coverage=1 00:44:35.339 --rc genhtml_function_coverage=1 00:44:35.339 --rc genhtml_legend=1 00:44:35.339 --rc geninfo_all_blocks=1 00:44:35.339 --rc geninfo_unexecuted_blocks=1 00:44:35.339 00:44:35.339 ' 00:44:35.339 21:34:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:35.339 21:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:35.339 21:34:09 
nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:35.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:44:35.339 21:34:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:44:37.252 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:44:37.252 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:44:37.252 Found net devices under 0000:0a:00.0: cvl_0_0 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:44:37.252 Found net devices under 0000:0a:00.1: cvl_0_1 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:37.252 21:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:37.252 21:34:11 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:37.252 21:34:11 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:37.252 21:34:11 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:37.252 21:34:11 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:37.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:37.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:44:37.252 00:44:37.252 --- 10.0.0.2 ping statistics --- 00:44:37.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:37.252 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:44:37.252 21:34:11 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:37.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:37.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:44:37.252 00:44:37.252 --- 10.0.0.1 ping statistics --- 00:44:37.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:37.252 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:44:37.252 21:34:11 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:37.252 21:34:11 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:44:37.252 21:34:11 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:44:37.253 21:34:11 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:38.627 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:38.627 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:38.627 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:38.627 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:38.627 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:38.627 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:38.627 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:38.627 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:38.627 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:38.627 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:38.627 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:38.627 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:38.627 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:38.627 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:38.627 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:38.627 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:39.560 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:44:39.818 21:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:39.818 21:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:39.818 21:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:39.818 21:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:39.818 21:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:39.818 21:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:39.818 21:34:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:44:39.818 21:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:39.818 21:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:39.818 21:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:39.818 21:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3249311 00:44:39.818 21:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:44:39.818 21:34:13 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3249311 00:44:39.818 21:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3249311 ']' 00:44:39.818 21:34:13 
nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:39.818 21:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:39.818 21:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:39.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:39.818 21:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:39.818 21:34:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:39.818 [2024-11-19 21:34:13.547168] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:44:39.818 [2024-11-19 21:34:13.547330] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:40.077 [2024-11-19 21:34:13.699896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:40.077 [2024-11-19 21:34:13.840230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:40.077 [2024-11-19 21:34:13.840308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:40.077 [2024-11-19 21:34:13.840335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:40.077 [2024-11-19 21:34:13.840359] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:40.077 [2024-11-19 21:34:13.840379] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:44:40.077 [2024-11-19 21:34:13.843142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:40.077 [2024-11-19 21:34:13.843200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:40.077 [2024-11-19 21:34:13.843240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:40.077 [2024-11-19 21:34:13.843245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:41.011 21:34:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:41.011 ************************************ 00:44:41.011 START TEST spdk_target_abort 00:44:41.011 ************************************ 00:44:41.011 21:34:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:44:41.011 21:34:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:44:41.011 21:34:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b 
spdk_target 00:44:41.011 21:34:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:41.011 21:34:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:44.290 spdk_targetn1 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:44.290 [2024-11-19 21:34:17.457897] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:44.290 [2024-11-19 21:34:17.504063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 
-- # local target r 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:44.290 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:44.291 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:44.291 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:44.291 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:44:44.291 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:44.291 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:44.291 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:44.291 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:44.291 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:44.291 21:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:47.571 Initializing NVMe Controllers 00:44:47.571 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:47.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:47.571 Initialization complete. Launching workers. 00:44:47.571 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9712, failed: 0 00:44:47.571 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1215, failed to submit 8497 00:44:47.571 success 710, unsuccessful 505, failed 0 00:44:47.571 21:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:47.571 21:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:50.849 Initializing NVMe Controllers 00:44:50.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:50.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:50.849 Initialization complete. Launching workers. 
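Note on the spdk_target_abort flow traced above: it boils down to a short RPC sequence issued against the running nvmf target, followed by the abort example at each queue depth in the qds=(4 24 64) loop. A minimal sketch of the equivalent manual steps, assuming the commands are run from the SPDK tree and rpc.py talks to the default /var/tmp/spdk.sock socket, with the PCI address and listener taken from this run:

# Attach 0000:88:00.0 as controller "spdk_target"; its namespace shows up as bdev spdk_targetn1
scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
# Create the TCP transport, a subsystem backed by that bdev, and a listener on 10.0.0.2:4420
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
# Drive mixed I/O plus aborts at one queue depth; the test repeats this for 4, 24 and 64
build/examples/abort -q 4 -w rw -M 50 -o 4096 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

Each run reports I/O completed on the namespace next to aborts submitted on the controller, split into success/unsuccessful/failed, as in the per-queue-depth summaries in this log.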
00:44:50.849 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8532, failed: 0 00:44:50.849 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1250, failed to submit 7282 00:44:50.849 success 302, unsuccessful 948, failed 0 00:44:50.849 21:34:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:50.849 21:34:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:54.131 Initializing NVMe Controllers 00:44:54.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:54.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:54.131 Initialization complete. Launching workers. 00:44:54.131 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 26640, failed: 0 00:44:54.131 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2635, failed to submit 24005 00:44:54.131 success 171, unsuccessful 2464, failed 0 00:44:54.131 21:34:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:44:54.131 21:34:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.131 21:34:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:54.131 21:34:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:54.131 21:34:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:44:54.131 21:34:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:54.131 21:34:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:55.503 21:34:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:55.503 21:34:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3249311 00:44:55.503 21:34:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3249311 ']' 00:44:55.503 21:34:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3249311 00:44:55.503 21:34:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:44:55.503 21:34:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:55.503 21:34:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3249311 00:44:55.503 21:34:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:55.503 21:34:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:55.503 21:34:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3249311' 00:44:55.503 killing process with pid 3249311 00:44:55.503 21:34:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3249311 00:44:55.503 21:34:29 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@978 -- # wait 3249311 00:44:56.437 00:44:56.437 real 0m15.341s 00:44:56.437 user 0m59.758s 00:44:56.437 sys 0m2.905s 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:56.437 ************************************ 00:44:56.437 END TEST spdk_target_abort 00:44:56.437 ************************************ 00:44:56.437 21:34:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:44:56.437 21:34:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:56.437 21:34:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:56.437 21:34:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:56.437 ************************************ 00:44:56.437 START TEST kernel_target_abort 00:44:56.437 ************************************ 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:44:56.437 21:34:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:57.372 Waiting for block devices as requested 00:44:57.372 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:44:57.630 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:57.630 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:57.630 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:57.889 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:57.889 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:57.889 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:57.889 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:58.147 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:58.147 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:58.147 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:58.147 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:58.406 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:58.406 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:58.406 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:58.406 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:58.665 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:44:58.924 No valid GPT data, bailing 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:58.924 21:34:32 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:44:58.924 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:44:59.183 00:44:59.183 Discovery Log Number of Records 2, Generation counter 2 00:44:59.183 =====Discovery Log Entry 0====== 00:44:59.183 trtype: tcp 00:44:59.183 adrfam: ipv4 00:44:59.183 subtype: current discovery subsystem 00:44:59.183 treq: not specified, sq flow control disable supported 00:44:59.183 portid: 1 00:44:59.183 trsvcid: 4420 00:44:59.183 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:44:59.183 traddr: 10.0.0.1 00:44:59.183 eflags: none 00:44:59.183 sectype: none 00:44:59.183 =====Discovery Log Entry 1====== 00:44:59.183 trtype: tcp 00:44:59.183 adrfam: ipv4 00:44:59.183 subtype: nvme subsystem 00:44:59.183 treq: not specified, sq flow control disable supported 00:44:59.183 portid: 1 00:44:59.183 trsvcid: 4420 00:44:59.183 subnqn: nqn.2016-06.io.spdk:testnqn 00:44:59.183 traddr: 10.0.0.1 00:44:59.183 eflags: none 00:44:59.183 sectype: none 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:59.183 21:34:32 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:59.183 21:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:02.466 Initializing NVMe Controllers 00:45:02.466 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:02.466 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:02.466 Initialization complete. Launching workers. 00:45:02.466 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36687, failed: 0 00:45:02.466 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36687, failed to submit 0 00:45:02.466 success 0, unsuccessful 36687, failed 0 00:45:02.466 21:34:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:02.466 21:34:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:05.749 Initializing NVMe Controllers 00:45:05.749 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:05.749 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:05.749 Initialization complete. Launching workers. 
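Note on kernel_target_abort: here the target is the in-kernel nvmet soft target rather than the SPDK app, built by configure_kernel_target through configfs; the nvme discover output above confirms the subsystem is listening on 10.0.0.1:4420. The xtrace lines show the values being echoed but not the files they are redirected into, so the attribute paths in this sketch follow the usual nvmet configfs layout and are an assumption; device and NQN names are taken from this run:

# Load the soft target and create a subsystem with one namespace backed by the local NVMe disk
modprobe nvmet
mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
echo 1 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
echo /dev/nvme0n1 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
echo 1 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
# Expose it on a TCP port and link the subsystem into the port
mkdir /sys/kernel/config/nvmet/ports/1
echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
echo tcp > /sys/kernel/config/nvmet/ports/1/addr_trtype
echo 4420 > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/

The abort example is then pointed at 10.0.0.1:4420 with the same queue-depth loop used against the SPDK target.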
00:45:05.749 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72702, failed: 0 00:45:05.749 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18342, failed to submit 54360 00:45:05.749 success 0, unsuccessful 18342, failed 0 00:45:05.749 21:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:05.749 21:34:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:09.086 Initializing NVMe Controllers 00:45:09.086 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:09.086 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:09.086 Initialization complete. Launching workers. 00:45:09.086 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65928, failed: 0 00:45:09.086 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16470, failed to submit 49458 00:45:09.086 success 0, unsuccessful 16470, failed 0 00:45:09.086 21:34:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:45:09.086 21:34:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:45:09.086 21:34:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:45:09.086 21:34:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:09.086 21:34:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:09.086 21:34:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:45:09.086 21:34:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:09.086 21:34:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:45:09.086 21:34:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:45:09.086 21:34:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:10.023 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:10.023 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:10.023 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:10.023 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:10.023 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:10.023 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:10.023 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:45:10.023 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:10.023 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:10.023 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:10.023 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:10.023 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:10.023 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:10.023 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:10.023 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:45:10.023 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:10.958 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:45:10.958 00:45:10.958 real 0m14.737s 00:45:10.958 user 0m7.244s 00:45:10.958 sys 0m3.324s 00:45:10.958 21:34:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:10.958 21:34:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:10.958 ************************************ 00:45:10.958 END TEST kernel_target_abort 00:45:10.958 ************************************ 00:45:10.958 21:34:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:45:10.958 21:34:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:45:10.958 21:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:10.958 21:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:45:10.958 21:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:10.958 21:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:45:10.958 21:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:10.958 21:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:10.958 rmmod nvme_tcp 00:45:10.958 rmmod nvme_fabrics 00:45:10.958 rmmod nvme_keyring 00:45:11.216 21:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:11.216 21:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:45:11.216 21:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:45:11.216 21:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3249311 ']' 00:45:11.216 21:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3249311 00:45:11.216 21:34:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3249311 ']' 00:45:11.216 21:34:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3249311 00:45:11.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3249311) - No such process 00:45:11.216 21:34:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3249311 is not found' 00:45:11.216 Process with pid 3249311 is not found 00:45:11.216 21:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:45:11.216 21:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:12.152 Waiting for block devices as requested 00:45:12.152 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:45:12.411 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:12.411 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:12.411 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:12.670 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:12.670 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:12.670 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:12.670 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:12.929 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:12.929 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:12.929 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:12.929 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:13.188 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:13.188 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:13.188 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:13.188 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:13.448 0000:80:04.0 
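Note on the teardown traced just above, before the keyring_file suite begins: clean_kernel_target reverses the configfs setup in child-before-parent order and unloads the modules, after which setup.sh rebinds the devices to vfio-pci as logged. As with the setup, the redirect target of the echo is not visible in the xtrace output, so the enable path below is an assumption:

# Disable the namespace, unlink the subsystem from the port, then remove configfs entries child-first
echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_tcp nvmet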
(8086 0e20): vfio-pci -> ioatdma 00:45:13.448 21:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:13.448 21:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:13.448 21:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:45:13.448 21:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:45:13.448 21:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:13.448 21:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:45:13.448 21:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:13.448 21:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:13.448 21:34:47 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:13.448 21:34:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:13.448 21:34:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:15.982 21:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:15.982 00:45:15.982 real 0m40.296s 00:45:15.982 user 1m9.453s 00:45:15.982 sys 0m9.669s 00:45:15.982 21:34:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:15.982 21:34:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:15.982 ************************************ 00:45:15.982 END TEST nvmf_abort_qd_sizes 00:45:15.982 ************************************ 00:45:15.982 21:34:49 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:15.982 21:34:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:15.982 21:34:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:15.982 21:34:49 -- common/autotest_common.sh@10 -- # set +x 00:45:15.982 ************************************ 00:45:15.982 START TEST keyring_file 00:45:15.982 ************************************ 00:45:15.982 21:34:49 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:15.982 * Looking for test storage... 
00:45:15.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:15.982 21:34:49 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:15.982 21:34:49 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:45:15.982 21:34:49 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:15.982 21:34:49 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@345 -- # : 1 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@353 -- # local d=1 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@355 -- # echo 1 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@353 -- # local d=2 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@355 -- # echo 2 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:15.982 21:34:49 keyring_file -- scripts/common.sh@368 -- # return 0 00:45:15.982 21:34:49 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:15.982 21:34:49 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:15.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:15.982 --rc genhtml_branch_coverage=1 00:45:15.982 --rc genhtml_function_coverage=1 00:45:15.982 --rc genhtml_legend=1 00:45:15.982 --rc geninfo_all_blocks=1 00:45:15.982 --rc geninfo_unexecuted_blocks=1 00:45:15.982 00:45:15.982 ' 00:45:15.982 21:34:49 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:15.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:15.982 --rc genhtml_branch_coverage=1 00:45:15.982 --rc genhtml_function_coverage=1 00:45:15.982 --rc genhtml_legend=1 00:45:15.982 --rc geninfo_all_blocks=1 
00:45:15.982 --rc geninfo_unexecuted_blocks=1 00:45:15.982 00:45:15.982 ' 00:45:15.982 21:34:49 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:15.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:15.982 --rc genhtml_branch_coverage=1 00:45:15.982 --rc genhtml_function_coverage=1 00:45:15.982 --rc genhtml_legend=1 00:45:15.983 --rc geninfo_all_blocks=1 00:45:15.983 --rc geninfo_unexecuted_blocks=1 00:45:15.983 00:45:15.983 ' 00:45:15.983 21:34:49 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:15.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:15.983 --rc genhtml_branch_coverage=1 00:45:15.983 --rc genhtml_function_coverage=1 00:45:15.983 --rc genhtml_legend=1 00:45:15.983 --rc geninfo_all_blocks=1 00:45:15.983 --rc geninfo_unexecuted_blocks=1 00:45:15.983 00:45:15.983 ' 00:45:15.983 21:34:49 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:15.983 21:34:49 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:45:15.983 21:34:49 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:15.983 21:34:49 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:15.983 21:34:49 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:15.983 21:34:49 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:15.983 21:34:49 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:15.983 21:34:49 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:15.983 21:34:49 keyring_file -- paths/export.sh@5 -- # export PATH 00:45:15.983 21:34:49 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@51 -- # : 0 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:15.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:15.983 21:34:49 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:15.983 21:34:49 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:15.983 21:34:49 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:45:15.983 21:34:49 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:45:15.983 21:34:49 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:45:15.983 21:34:49 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LjeCLRtiS0 00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LjeCLRtiS0 00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LjeCLRtiS0 00:45:15.983 21:34:49 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.LjeCLRtiS0 00:45:15.983 21:34:49 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@17 -- # name=key1 00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sl8jwME5Vp 00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:15.983 21:34:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sl8jwME5Vp 00:45:15.983 21:34:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sl8jwME5Vp 00:45:15.983 21:34:49 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.sl8jwME5Vp 00:45:15.983 21:34:49 keyring_file -- keyring/file.sh@30 -- # tgtpid=3255536 00:45:15.983 21:34:49 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:15.983 21:34:49 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3255536 00:45:15.983 21:34:49 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3255536 ']' 00:45:15.983 21:34:49 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:15.983 21:34:49 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:15.983 21:34:49 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:15.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:15.983 21:34:49 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:15.983 21:34:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:15.983 [2024-11-19 21:34:49.544385] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:45:15.983 [2024-11-19 21:34:49.544541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255536 ] 00:45:15.983 [2024-11-19 21:34:49.692438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:16.242 [2024-11-19 21:34:49.829170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:45:17.176 21:34:50 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:17.176 [2024-11-19 21:34:50.810253] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:17.176 null0 00:45:17.176 [2024-11-19 21:34:50.842271] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:17.176 [2024-11-19 21:34:50.842925] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.176 21:34:50 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:17.176 [2024-11-19 21:34:50.866317] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:45:17.176 request: 00:45:17.176 { 00:45:17.176 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:45:17.176 "secure_channel": false, 00:45:17.176 "listen_address": { 00:45:17.176 "trtype": "tcp", 00:45:17.176 "traddr": "127.0.0.1", 00:45:17.176 "trsvcid": "4420" 00:45:17.176 }, 00:45:17.176 "method": "nvmf_subsystem_add_listener", 00:45:17.176 "req_id": 1 00:45:17.176 } 00:45:17.176 Got JSON-RPC error response 00:45:17.176 response: 00:45:17.176 { 00:45:17.176 
"code": -32602, 00:45:17.176 "message": "Invalid parameters" 00:45:17.176 } 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:17.176 21:34:50 keyring_file -- keyring/file.sh@47 -- # bperfpid=3255680 00:45:17.176 21:34:50 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:45:17.176 21:34:50 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3255680 /var/tmp/bperf.sock 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3255680 ']' 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:17.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:17.176 21:34:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:17.176 [2024-11-19 21:34:50.952220] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:45:17.176 [2024-11-19 21:34:50.952363] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255680 ] 00:45:17.435 [2024-11-19 21:34:51.095396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:17.693 [2024-11-19 21:34:51.232214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:18.259 21:34:51 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:18.259 21:34:51 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:45:18.259 21:34:51 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LjeCLRtiS0 00:45:18.259 21:34:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LjeCLRtiS0 00:45:18.516 21:34:52 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sl8jwME5Vp 00:45:18.516 21:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sl8jwME5Vp 00:45:18.775 21:34:52 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:45:18.775 21:34:52 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:45:18.775 21:34:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:18.775 21:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:18.775 21:34:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:45:19.033 21:34:52 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.LjeCLRtiS0 == \/\t\m\p\/\t\m\p\.\L\j\e\C\L\R\t\i\S\0 ]] 00:45:19.033 21:34:52 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:45:19.033 21:34:52 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:45:19.033 21:34:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:19.033 21:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:19.033 21:34:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:19.292 21:34:52 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.sl8jwME5Vp == \/\t\m\p\/\t\m\p\.\s\l\8\j\w\M\E\5\V\p ]] 00:45:19.292 21:34:52 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:45:19.292 21:34:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:19.292 21:34:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:19.292 21:34:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:19.292 21:34:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:19.292 21:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:19.550 21:34:53 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:45:19.550 21:34:53 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:45:19.550 21:34:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:19.550 21:34:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:19.550 21:34:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:19.550 21:34:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:19.550 21:34:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:19.808 21:34:53 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:45:19.808 21:34:53 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:19.808 21:34:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:20.067 [2024-11-19 21:34:53.755305] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:20.067 nvme0n1 00:45:20.067 21:34:53 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:45:20.067 21:34:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:20.067 21:34:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:20.067 21:34:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:20.067 21:34:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:20.067 21:34:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:20.632 21:34:54 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:45:20.632 21:34:54 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:45:20.632 21:34:54 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:45:20.632 21:34:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:20.632 21:34:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:20.632 21:34:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:20.632 21:34:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:20.632 21:34:54 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:45:20.632 21:34:54 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:20.890 Running I/O for 1 seconds... 00:45:21.824 6452.00 IOPS, 25.20 MiB/s 00:45:21.824 Latency(us) 00:45:21.824 [2024-11-19T20:34:55.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:21.824 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:45:21.824 nvme0n1 : 1.01 6506.53 25.42 0.00 0.00 19584.42 7233.23 30098.01 00:45:21.824 [2024-11-19T20:34:55.619Z] =================================================================================================================== 00:45:21.824 [2024-11-19T20:34:55.619Z] Total : 6506.53 25.42 0.00 0.00 19584.42 7233.23 30098.01 00:45:21.824 { 00:45:21.824 "results": [ 00:45:21.824 { 00:45:21.824 "job": "nvme0n1", 00:45:21.824 "core_mask": "0x2", 00:45:21.824 "workload": "randrw", 00:45:21.824 "percentage": 50, 00:45:21.824 "status": "finished", 00:45:21.824 "queue_depth": 128, 00:45:21.824 "io_size": 4096, 00:45:21.824 "runtime": 1.011599, 00:45:21.824 "iops": 6506.530749832691, 00:45:21.824 "mibps": 25.41613574153395, 00:45:21.824 "io_failed": 0, 00:45:21.824 "io_timeout": 0, 00:45:21.824 "avg_latency_us": 19584.41619410964, 00:45:21.824 "min_latency_us": 7233.2325925925925, 00:45:21.824 "max_latency_us": 30098.014814814815 00:45:21.824 } 00:45:21.824 ], 00:45:21.824 "core_count": 1 00:45:21.824 } 00:45:21.824 21:34:55 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:21.824 21:34:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:22.082 21:34:55 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:45:22.082 21:34:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:22.082 21:34:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:22.082 21:34:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:22.082 21:34:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:22.082 21:34:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:22.340 21:34:56 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:45:22.340 21:34:56 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:45:22.340 21:34:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:22.340 21:34:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:22.340 21:34:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:22.340 21:34:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:22.340 21:34:56 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:22.598 21:34:56 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:45:22.598 21:34:56 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:22.598 21:34:56 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:22.598 21:34:56 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:22.598 21:34:56 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:22.598 21:34:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:22.598 21:34:56 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:22.598 21:34:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:22.598 21:34:56 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:22.598 21:34:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:22.856 [2024-11-19 21:34:56.628214] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:22.856 [2024-11-19 21:34:56.628388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (107): Transport endpoint is not connected 00:45:22.856 [2024-11-19 21:34:56.629362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:45:22.856 [2024-11-19 21:34:56.630363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:45:22.856 [2024-11-19 21:34:56.630390] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:22.856 [2024-11-19 21:34:56.630426] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:22.856 [2024-11-19 21:34:56.630452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
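The errors above come from keyring/file.sh@70, which deliberately attaches a controller with key1 instead of the key0 used for the earlier, successful attach and, via the NOT wrapper, asserts that this attach fails; the JSON-RPC error recorded next (code -5, Input/output error) is the expected outcome. Reduced to its essence, the pattern looks roughly like the sketch below, where NOT() is a hypothetical simplification of the autotest_common.sh helper and rpc.py plus the bperf socket path are taken from the trace; this is an illustrative sketch, not part of the captured log.

    # Negative-test sketch: succeed only if the wrapped command fails.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    NOT() {
        if "$@"; then
            return 1    # unexpected success
        fi
        return 0        # the failure the test expects
    }

    # Attaching with the mismatched PSK must fail (the trace records code -5).
    NOT "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
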
00:45:22.856 request: 00:45:22.856 { 00:45:22.856 "name": "nvme0", 00:45:22.856 "trtype": "tcp", 00:45:22.856 "traddr": "127.0.0.1", 00:45:22.856 "adrfam": "ipv4", 00:45:22.856 "trsvcid": "4420", 00:45:22.856 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:22.856 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:22.856 "prchk_reftag": false, 00:45:22.856 "prchk_guard": false, 00:45:22.856 "hdgst": false, 00:45:22.856 "ddgst": false, 00:45:22.856 "psk": "key1", 00:45:22.856 "allow_unrecognized_csi": false, 00:45:22.856 "method": "bdev_nvme_attach_controller", 00:45:22.856 "req_id": 1 00:45:22.856 } 00:45:22.856 Got JSON-RPC error response 00:45:22.856 response: 00:45:22.856 { 00:45:22.856 "code": -5, 00:45:22.856 "message": "Input/output error" 00:45:22.856 } 00:45:22.856 21:34:56 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:22.856 21:34:56 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:22.856 21:34:56 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:22.856 21:34:56 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:22.856 21:34:56 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:45:23.115 21:34:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:23.115 21:34:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:23.115 21:34:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:23.115 21:34:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:23.115 21:34:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:23.373 21:34:56 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:45:23.373 21:34:56 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:45:23.373 21:34:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:23.373 21:34:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:23.373 21:34:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:23.373 21:34:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:23.373 21:34:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:23.630 21:34:57 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:45:23.631 21:34:57 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:45:23.631 21:34:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:23.888 21:34:57 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:45:23.888 21:34:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:45:24.146 21:34:57 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:45:24.146 21:34:57 keyring_file -- keyring/file.sh@78 -- # jq length 00:45:24.146 21:34:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:24.404 21:34:58 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:45:24.404 21:34:58 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.LjeCLRtiS0 00:45:24.404 21:34:58 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.LjeCLRtiS0 00:45:24.404 21:34:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:24.404 21:34:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.LjeCLRtiS0 00:45:24.404 21:34:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:24.404 21:34:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:24.404 21:34:58 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:24.404 21:34:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:24.404 21:34:58 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LjeCLRtiS0 00:45:24.404 21:34:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LjeCLRtiS0 00:45:24.663 [2024-11-19 21:34:58.277543] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.LjeCLRtiS0': 0100660 00:45:24.663 [2024-11-19 21:34:58.277606] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:45:24.663 request: 00:45:24.663 { 00:45:24.663 "name": "key0", 00:45:24.663 "path": "/tmp/tmp.LjeCLRtiS0", 00:45:24.663 "method": "keyring_file_add_key", 00:45:24.663 "req_id": 1 00:45:24.663 } 00:45:24.663 Got JSON-RPC error response 00:45:24.663 response: 00:45:24.663 { 00:45:24.663 "code": -1, 00:45:24.663 "message": "Operation not permitted" 00:45:24.663 } 00:45:24.663 21:34:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:24.663 21:34:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:24.663 21:34:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:24.663 21:34:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:24.663 21:34:58 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.LjeCLRtiS0 00:45:24.663 21:34:58 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LjeCLRtiS0 00:45:24.663 21:34:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LjeCLRtiS0 00:45:24.921 21:34:58 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.LjeCLRtiS0 00:45:24.922 21:34:58 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:45:24.922 21:34:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:24.922 21:34:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:24.922 21:34:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:24.922 21:34:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:24.922 21:34:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:25.180 21:34:58 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:45:25.180 21:34:58 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:25.180 21:34:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:25.180 21:34:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:25.180 21:34:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:25.180 21:34:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:25.180 21:34:58 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:25.180 21:34:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:25.180 21:34:58 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:25.180 21:34:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:25.437 [2024-11-19 21:34:59.156053] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.LjeCLRtiS0': No such file or directory 00:45:25.437 [2024-11-19 21:34:59.156129] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:45:25.437 [2024-11-19 21:34:59.156162] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:45:25.437 [2024-11-19 21:34:59.156199] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:45:25.437 [2024-11-19 21:34:59.156218] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:45:25.437 [2024-11-19 21:34:59.156236] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:45:25.437 request: 00:45:25.437 { 00:45:25.437 "name": "nvme0", 00:45:25.437 "trtype": "tcp", 00:45:25.437 "traddr": "127.0.0.1", 00:45:25.437 "adrfam": "ipv4", 00:45:25.437 "trsvcid": "4420", 00:45:25.437 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:25.437 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:25.437 "prchk_reftag": false, 00:45:25.437 "prchk_guard": false, 00:45:25.437 "hdgst": false, 00:45:25.437 "ddgst": false, 00:45:25.437 "psk": "key0", 00:45:25.437 "allow_unrecognized_csi": false, 00:45:25.437 "method": "bdev_nvme_attach_controller", 00:45:25.437 "req_id": 1 00:45:25.437 } 00:45:25.437 Got JSON-RPC error response 00:45:25.437 response: 00:45:25.437 { 00:45:25.437 "code": -19, 00:45:25.437 "message": "No such device" 00:45:25.437 } 00:45:25.437 21:34:59 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:25.437 21:34:59 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:25.437 21:34:59 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:25.437 21:34:59 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:25.437 21:34:59 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:45:25.437 21:34:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:25.695 21:34:59 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:25.695 21:34:59 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:45:25.695 21:34:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:25.695 21:34:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:25.695 21:34:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:25.695 21:34:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:25.695 21:34:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QQzcD9WSUh 00:45:25.695 21:34:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:25.695 21:34:59 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:25.695 21:34:59 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:45:25.695 21:34:59 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:25.695 21:34:59 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:45:25.695 21:34:59 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:25.695 21:34:59 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:25.953 21:34:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QQzcD9WSUh 00:45:25.953 21:34:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QQzcD9WSUh 00:45:25.953 21:34:59 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.QQzcD9WSUh 00:45:25.953 21:34:59 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QQzcD9WSUh 00:45:25.953 21:34:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QQzcD9WSUh 00:45:26.211 21:34:59 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:26.211 21:34:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:26.469 nvme0n1 00:45:26.469 21:35:00 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:45:26.469 21:35:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:26.469 21:35:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:26.469 21:35:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:26.469 21:35:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:26.469 21:35:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:26.728 21:35:00 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:45:26.728 21:35:00 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:45:26.728 21:35:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:26.986 21:35:00 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:45:26.986 21:35:00 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:45:26.986 21:35:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:26.986 21:35:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:26.986 21:35:00 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:27.244 21:35:00 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:45:27.244 21:35:00 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:45:27.244 21:35:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:27.244 21:35:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:27.244 21:35:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:27.244 21:35:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:27.244 21:35:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:27.503 21:35:01 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:45:27.503 21:35:01 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:27.503 21:35:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:27.760 21:35:01 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:45:27.760 21:35:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:27.760 21:35:01 keyring_file -- keyring/file.sh@105 -- # jq length 00:45:28.018 21:35:01 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:45:28.018 21:35:01 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QQzcD9WSUh 00:45:28.018 21:35:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QQzcD9WSUh 00:45:28.583 21:35:02 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sl8jwME5Vp 00:45:28.583 21:35:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sl8jwME5Vp 00:45:28.584 21:35:02 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:28.584 21:35:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:29.149 nvme0n1 00:45:29.149 21:35:02 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:45:29.149 21:35:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:45:29.407 21:35:03 keyring_file -- keyring/file.sh@113 -- # config='{ 00:45:29.407 "subsystems": [ 00:45:29.407 { 00:45:29.407 "subsystem": "keyring", 00:45:29.407 "config": [ 00:45:29.407 { 00:45:29.407 "method": "keyring_file_add_key", 00:45:29.407 "params": { 00:45:29.407 "name": "key0", 00:45:29.407 "path": "/tmp/tmp.QQzcD9WSUh" 00:45:29.407 } 00:45:29.407 }, 00:45:29.407 { 00:45:29.407 "method": "keyring_file_add_key", 00:45:29.407 "params": { 00:45:29.407 "name": "key1", 00:45:29.407 "path": "/tmp/tmp.sl8jwME5Vp" 00:45:29.407 } 00:45:29.407 } 00:45:29.407 ] 00:45:29.407 
}, 00:45:29.407 { 00:45:29.407 "subsystem": "iobuf", 00:45:29.407 "config": [ 00:45:29.407 { 00:45:29.407 "method": "iobuf_set_options", 00:45:29.407 "params": { 00:45:29.407 "small_pool_count": 8192, 00:45:29.407 "large_pool_count": 1024, 00:45:29.407 "small_bufsize": 8192, 00:45:29.407 "large_bufsize": 135168, 00:45:29.407 "enable_numa": false 00:45:29.407 } 00:45:29.407 } 00:45:29.407 ] 00:45:29.407 }, 00:45:29.407 { 00:45:29.407 "subsystem": "sock", 00:45:29.407 "config": [ 00:45:29.407 { 00:45:29.407 "method": "sock_set_default_impl", 00:45:29.407 "params": { 00:45:29.407 "impl_name": "posix" 00:45:29.407 } 00:45:29.407 }, 00:45:29.407 { 00:45:29.407 "method": "sock_impl_set_options", 00:45:29.407 "params": { 00:45:29.407 "impl_name": "ssl", 00:45:29.407 "recv_buf_size": 4096, 00:45:29.407 "send_buf_size": 4096, 00:45:29.407 "enable_recv_pipe": true, 00:45:29.407 "enable_quickack": false, 00:45:29.407 "enable_placement_id": 0, 00:45:29.407 "enable_zerocopy_send_server": true, 00:45:29.407 "enable_zerocopy_send_client": false, 00:45:29.407 "zerocopy_threshold": 0, 00:45:29.407 "tls_version": 0, 00:45:29.407 "enable_ktls": false 00:45:29.407 } 00:45:29.407 }, 00:45:29.407 { 00:45:29.407 "method": "sock_impl_set_options", 00:45:29.407 "params": { 00:45:29.407 "impl_name": "posix", 00:45:29.407 "recv_buf_size": 2097152, 00:45:29.407 "send_buf_size": 2097152, 00:45:29.407 "enable_recv_pipe": true, 00:45:29.407 "enable_quickack": false, 00:45:29.407 "enable_placement_id": 0, 00:45:29.407 "enable_zerocopy_send_server": true, 00:45:29.407 "enable_zerocopy_send_client": false, 00:45:29.407 "zerocopy_threshold": 0, 00:45:29.407 "tls_version": 0, 00:45:29.407 "enable_ktls": false 00:45:29.407 } 00:45:29.407 } 00:45:29.407 ] 00:45:29.407 }, 00:45:29.407 { 00:45:29.407 "subsystem": "vmd", 00:45:29.407 "config": [] 00:45:29.407 }, 00:45:29.407 { 00:45:29.407 "subsystem": "accel", 00:45:29.407 "config": [ 00:45:29.407 { 00:45:29.407 "method": "accel_set_options", 00:45:29.407 "params": { 00:45:29.407 "small_cache_size": 128, 00:45:29.407 "large_cache_size": 16, 00:45:29.407 "task_count": 2048, 00:45:29.407 "sequence_count": 2048, 00:45:29.407 "buf_count": 2048 00:45:29.407 } 00:45:29.407 } 00:45:29.407 ] 00:45:29.407 }, 00:45:29.407 { 00:45:29.407 "subsystem": "bdev", 00:45:29.407 "config": [ 00:45:29.407 { 00:45:29.407 "method": "bdev_set_options", 00:45:29.407 "params": { 00:45:29.407 "bdev_io_pool_size": 65535, 00:45:29.407 "bdev_io_cache_size": 256, 00:45:29.407 "bdev_auto_examine": true, 00:45:29.407 "iobuf_small_cache_size": 128, 00:45:29.408 "iobuf_large_cache_size": 16 00:45:29.408 } 00:45:29.408 }, 00:45:29.408 { 00:45:29.408 "method": "bdev_raid_set_options", 00:45:29.408 "params": { 00:45:29.408 "process_window_size_kb": 1024, 00:45:29.408 "process_max_bandwidth_mb_sec": 0 00:45:29.408 } 00:45:29.408 }, 00:45:29.408 { 00:45:29.408 "method": "bdev_iscsi_set_options", 00:45:29.408 "params": { 00:45:29.408 "timeout_sec": 30 00:45:29.408 } 00:45:29.408 }, 00:45:29.408 { 00:45:29.408 "method": "bdev_nvme_set_options", 00:45:29.408 "params": { 00:45:29.408 "action_on_timeout": "none", 00:45:29.408 "timeout_us": 0, 00:45:29.408 "timeout_admin_us": 0, 00:45:29.408 "keep_alive_timeout_ms": 10000, 00:45:29.408 "arbitration_burst": 0, 00:45:29.408 "low_priority_weight": 0, 00:45:29.408 "medium_priority_weight": 0, 00:45:29.408 "high_priority_weight": 0, 00:45:29.408 "nvme_adminq_poll_period_us": 10000, 00:45:29.408 "nvme_ioq_poll_period_us": 0, 00:45:29.408 "io_queue_requests": 512, 00:45:29.408 
"delay_cmd_submit": true, 00:45:29.408 "transport_retry_count": 4, 00:45:29.408 "bdev_retry_count": 3, 00:45:29.408 "transport_ack_timeout": 0, 00:45:29.408 "ctrlr_loss_timeout_sec": 0, 00:45:29.408 "reconnect_delay_sec": 0, 00:45:29.408 "fast_io_fail_timeout_sec": 0, 00:45:29.408 "disable_auto_failback": false, 00:45:29.408 "generate_uuids": false, 00:45:29.408 "transport_tos": 0, 00:45:29.408 "nvme_error_stat": false, 00:45:29.408 "rdma_srq_size": 0, 00:45:29.408 "io_path_stat": false, 00:45:29.408 "allow_accel_sequence": false, 00:45:29.408 "rdma_max_cq_size": 0, 00:45:29.408 "rdma_cm_event_timeout_ms": 0, 00:45:29.408 "dhchap_digests": [ 00:45:29.408 "sha256", 00:45:29.408 "sha384", 00:45:29.408 "sha512" 00:45:29.408 ], 00:45:29.408 "dhchap_dhgroups": [ 00:45:29.408 "null", 00:45:29.408 "ffdhe2048", 00:45:29.408 "ffdhe3072", 00:45:29.408 "ffdhe4096", 00:45:29.408 "ffdhe6144", 00:45:29.408 "ffdhe8192" 00:45:29.408 ] 00:45:29.408 } 00:45:29.408 }, 00:45:29.408 { 00:45:29.408 "method": "bdev_nvme_attach_controller", 00:45:29.408 "params": { 00:45:29.408 "name": "nvme0", 00:45:29.408 "trtype": "TCP", 00:45:29.408 "adrfam": "IPv4", 00:45:29.408 "traddr": "127.0.0.1", 00:45:29.408 "trsvcid": "4420", 00:45:29.408 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:29.408 "prchk_reftag": false, 00:45:29.408 "prchk_guard": false, 00:45:29.408 "ctrlr_loss_timeout_sec": 0, 00:45:29.408 "reconnect_delay_sec": 0, 00:45:29.408 "fast_io_fail_timeout_sec": 0, 00:45:29.408 "psk": "key0", 00:45:29.408 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:29.408 "hdgst": false, 00:45:29.408 "ddgst": false, 00:45:29.408 "multipath": "multipath" 00:45:29.408 } 00:45:29.408 }, 00:45:29.408 { 00:45:29.408 "method": "bdev_nvme_set_hotplug", 00:45:29.408 "params": { 00:45:29.408 "period_us": 100000, 00:45:29.408 "enable": false 00:45:29.408 } 00:45:29.408 }, 00:45:29.408 { 00:45:29.408 "method": "bdev_wait_for_examine" 00:45:29.408 } 00:45:29.408 ] 00:45:29.408 }, 00:45:29.408 { 00:45:29.408 "subsystem": "nbd", 00:45:29.408 "config": [] 00:45:29.408 } 00:45:29.408 ] 00:45:29.408 }' 00:45:29.408 21:35:03 keyring_file -- keyring/file.sh@115 -- # killprocess 3255680 00:45:29.408 21:35:03 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3255680 ']' 00:45:29.408 21:35:03 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3255680 00:45:29.408 21:35:03 keyring_file -- common/autotest_common.sh@959 -- # uname 00:45:29.408 21:35:03 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:29.408 21:35:03 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3255680 00:45:29.408 21:35:03 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:29.408 21:35:03 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:29.408 21:35:03 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3255680' 00:45:29.408 killing process with pid 3255680 00:45:29.408 21:35:03 keyring_file -- common/autotest_common.sh@973 -- # kill 3255680 00:45:29.408 Received shutdown signal, test time was about 1.000000 seconds 00:45:29.408 00:45:29.408 Latency(us) 00:45:29.408 [2024-11-19T20:35:03.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:29.408 [2024-11-19T20:35:03.203Z] =================================================================================================================== 00:45:29.408 [2024-11-19T20:35:03.203Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:29.408 21:35:03 
keyring_file -- common/autotest_common.sh@978 -- # wait 3255680 00:45:30.341 21:35:03 keyring_file -- keyring/file.sh@118 -- # bperfpid=3257282 00:45:30.341 21:35:03 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3257282 /var/tmp/bperf.sock 00:45:30.341 21:35:03 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3257282 ']' 00:45:30.341 21:35:03 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:45:30.341 21:35:03 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:30.341 21:35:03 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:30.341 21:35:03 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:45:30.341 "subsystems": [ 00:45:30.341 { 00:45:30.341 "subsystem": "keyring", 00:45:30.341 "config": [ 00:45:30.341 { 00:45:30.341 "method": "keyring_file_add_key", 00:45:30.341 "params": { 00:45:30.342 "name": "key0", 00:45:30.342 "path": "/tmp/tmp.QQzcD9WSUh" 00:45:30.342 } 00:45:30.342 }, 00:45:30.342 { 00:45:30.342 "method": "keyring_file_add_key", 00:45:30.342 "params": { 00:45:30.342 "name": "key1", 00:45:30.342 "path": "/tmp/tmp.sl8jwME5Vp" 00:45:30.342 } 00:45:30.342 } 00:45:30.342 ] 00:45:30.342 }, 00:45:30.342 { 00:45:30.342 "subsystem": "iobuf", 00:45:30.342 "config": [ 00:45:30.342 { 00:45:30.342 "method": "iobuf_set_options", 00:45:30.342 "params": { 00:45:30.342 "small_pool_count": 8192, 00:45:30.342 "large_pool_count": 1024, 00:45:30.342 "small_bufsize": 8192, 00:45:30.342 "large_bufsize": 135168, 00:45:30.342 "enable_numa": false 00:45:30.342 } 00:45:30.342 } 00:45:30.342 ] 00:45:30.342 }, 00:45:30.342 { 00:45:30.342 "subsystem": "sock", 00:45:30.342 "config": [ 00:45:30.342 { 00:45:30.342 "method": "sock_set_default_impl", 00:45:30.342 "params": { 00:45:30.342 "impl_name": "posix" 00:45:30.342 } 00:45:30.342 }, 00:45:30.342 { 00:45:30.342 "method": "sock_impl_set_options", 00:45:30.342 "params": { 00:45:30.342 "impl_name": "ssl", 00:45:30.342 "recv_buf_size": 4096, 00:45:30.342 "send_buf_size": 4096, 00:45:30.342 "enable_recv_pipe": true, 00:45:30.342 "enable_quickack": false, 00:45:30.342 "enable_placement_id": 0, 00:45:30.342 "enable_zerocopy_send_server": true, 00:45:30.342 "enable_zerocopy_send_client": false, 00:45:30.342 "zerocopy_threshold": 0, 00:45:30.342 "tls_version": 0, 00:45:30.342 "enable_ktls": false 00:45:30.342 } 00:45:30.342 }, 00:45:30.342 { 00:45:30.342 "method": "sock_impl_set_options", 00:45:30.342 "params": { 00:45:30.342 "impl_name": "posix", 00:45:30.342 "recv_buf_size": 2097152, 00:45:30.342 "send_buf_size": 2097152, 00:45:30.342 "enable_recv_pipe": true, 00:45:30.342 "enable_quickack": false, 00:45:30.342 "enable_placement_id": 0, 00:45:30.342 "enable_zerocopy_send_server": true, 00:45:30.342 "enable_zerocopy_send_client": false, 00:45:30.342 "zerocopy_threshold": 0, 00:45:30.342 "tls_version": 0, 00:45:30.342 "enable_ktls": false 00:45:30.342 } 00:45:30.342 } 00:45:30.342 ] 00:45:30.342 }, 00:45:30.342 { 00:45:30.342 "subsystem": "vmd", 00:45:30.342 "config": [] 00:45:30.342 }, 00:45:30.342 { 00:45:30.342 "subsystem": "accel", 00:45:30.342 "config": [ 00:45:30.342 { 00:45:30.342 "method": "accel_set_options", 00:45:30.342 "params": { 00:45:30.342 "small_cache_size": 128, 00:45:30.342 "large_cache_size": 16, 00:45:30.342 "task_count": 2048, 00:45:30.342 "sequence_count": 2048, 00:45:30.342 "buf_count": 2048 00:45:30.342 } 
00:45:30.342 } 00:45:30.342 ] 00:45:30.342 }, 00:45:30.342 { 00:45:30.342 "subsystem": "bdev", 00:45:30.342 "config": [ 00:45:30.342 { 00:45:30.342 "method": "bdev_set_options", 00:45:30.342 "params": { 00:45:30.342 "bdev_io_pool_size": 65535, 00:45:30.342 "bdev_io_cache_size": 256, 00:45:30.342 "bdev_auto_examine": true, 00:45:30.342 "iobuf_small_cache_size": 128, 00:45:30.342 "iobuf_large_cache_size": 16 00:45:30.342 } 00:45:30.342 }, 00:45:30.342 { 00:45:30.342 "method": "bdev_raid_set_options", 00:45:30.342 "params": { 00:45:30.342 "process_window_size_kb": 1024, 00:45:30.342 "process_max_bandwidth_mb_sec": 0 00:45:30.342 } 00:45:30.342 }, 00:45:30.342 { 00:45:30.342 "method": "bdev_iscsi_set_options", 00:45:30.342 "params": { 00:45:30.342 "timeout_sec": 30 00:45:30.342 } 00:45:30.342 }, 00:45:30.342 { 00:45:30.342 "method": "bdev_nvme_set_options", 00:45:30.342 "params": { 00:45:30.342 "action_on_timeout": "none", 00:45:30.342 "timeout_us": 0, 00:45:30.342 "timeout_admin_us": 0, 00:45:30.342 "keep_alive_timeout_ms": 10000, 00:45:30.342 "arbitration_burst": 0, 00:45:30.342 "low_priority_weight": 0, 00:45:30.342 "medium_priority_weight": 0, 00:45:30.342 "high_priority_weight": 0, 00:45:30.342 "nvme_adminq_poll_period_us": 10000, 00:45:30.342 "nvme_ioq_poll_period_us": 0, 00:45:30.342 "io_queue_requests": 512, 00:45:30.342 "delay_cmd_submit": true, 00:45:30.342 "transport_retry_count": 4, 00:45:30.342 "bdev_retry_count": 3, 00:45:30.342 "transport_ack_timeout": 0, 00:45:30.342 "ctrlr_loss_timeout_sec": 0, 00:45:30.342 "reconnect_delay_sec": 0, 00:45:30.342 "fast_io_fail_timeout_sec": 0, 00:45:30.342 "disable_auto_failback": false, 00:45:30.342 "generate_uuids": false, 00:45:30.342 "transport_tos": 0, 00:45:30.342 "nvme_error_stat": false, 00:45:30.342 "rdma_srq_size": 0, 00:45:30.342 "io_path_stat": false, 00:45:30.342 "allow_accel_sequence": false, 00:45:30.342 "rdma_max_cq_size": 0, 00:45:30.342 "rdma_cm_event_timeout_ms": 0, 00:45:30.342 "dhchap_digests": [ 00:45:30.342 "sha256", 00:45:30.342 "sha384", 00:45:30.342 "sha512" 00:45:30.342 ], 00:45:30.342 "dhchap_dhgroups": [ 00:45:30.342 "null", 00:45:30.342 "ffdhe2048", 00:45:30.342 "ffdhe3072", 00:45:30.342 "ffdhe4096", 00:45:30.342 "ffdhe6144", 00:45:30.342 "ffdhe8192" 00:45:30.342 ] 00:45:30.342 } 00:45:30.342 }, 00:45:30.342 { 00:45:30.342 "method": "bdev_nvme_attach_controller", 00:45:30.342 "params": { 00:45:30.342 "name": "nvme0", 00:45:30.342 "trtype": "TCP", 00:45:30.342 "adrfam": "IPv4", 00:45:30.342 "traddr": "127.0.0.1", 00:45:30.342 "trsvcid": "4420", 00:45:30.342 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:30.342 "prchk_reftag": false, 00:45:30.342 "prchk_guard": false, 00:45:30.342 "ctrlr_loss_timeout_sec": 0, 00:45:30.342 "reconnect_delay_sec": 0, 00:45:30.342 "fast_io_fail_timeout_sec": 0, 00:45:30.342 "psk": "key0", 00:45:30.342 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:30.342 "hdgst": false, 00:45:30.342 "ddgst": false, 00:45:30.342 "multipath": "multipath" 00:45:30.342 } 00:45:30.342 }, 00:45:30.342 { 00:45:30.342 "method": "bdev_nvme_set_hotplug", 00:45:30.342 "params": { 00:45:30.342 "period_us": 100000, 00:45:30.342 "enable": false 00:45:30.342 } 00:45:30.342 }, 00:45:30.342 { 00:45:30.342 "method": "bdev_wait_for_examine" 00:45:30.342 } 00:45:30.342 ] 00:45:30.342 }, 00:45:30.342 { 00:45:30.342 "subsystem": "nbd", 00:45:30.342 "config": [] 00:45:30.342 } 00:45:30.342 ] 00:45:30.342 }' 00:45:30.342 21:35:03 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bperf.sock...' 00:45:30.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:30.342 21:35:03 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:30.342 21:35:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:30.342 [2024-11-19 21:35:03.996540] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 00:45:30.342 [2024-11-19 21:35:03.996716] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3257282 ] 00:45:30.601 [2024-11-19 21:35:04.142472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:30.601 [2024-11-19 21:35:04.265285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:31.167 [2024-11-19 21:35:04.689362] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:31.425 21:35:04 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:31.425 21:35:04 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:45:31.425 21:35:04 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:45:31.425 21:35:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:31.425 21:35:04 keyring_file -- keyring/file.sh@121 -- # jq length 00:45:31.682 21:35:05 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:45:31.682 21:35:05 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:45:31.682 21:35:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:31.682 21:35:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:31.682 21:35:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:31.682 21:35:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:31.682 21:35:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:31.940 21:35:05 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:45:31.940 21:35:05 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:45:31.940 21:35:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:31.940 21:35:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:31.940 21:35:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:31.940 21:35:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:31.940 21:35:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:32.198 21:35:05 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:45:32.198 21:35:05 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:45:32.198 21:35:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:45:32.198 21:35:05 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:45:32.456 21:35:06 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:45:32.456 21:35:06 keyring_file -- keyring/file.sh@1 -- # cleanup 00:45:32.456 21:35:06 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.QQzcD9WSUh /tmp/tmp.sl8jwME5Vp 00:45:32.456 21:35:06 keyring_file -- keyring/file.sh@20 -- # killprocess 3257282 00:45:32.456 21:35:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3257282 ']' 00:45:32.456 21:35:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3257282 00:45:32.456 21:35:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:45:32.456 21:35:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:32.456 21:35:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3257282 00:45:32.456 21:35:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:32.456 21:35:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:32.456 21:35:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3257282' 00:45:32.456 killing process with pid 3257282 00:45:32.456 21:35:06 keyring_file -- common/autotest_common.sh@973 -- # kill 3257282 00:45:32.456 Received shutdown signal, test time was about 1.000000 seconds 00:45:32.456 00:45:32.456 Latency(us) 00:45:32.456 [2024-11-19T20:35:06.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:32.456 [2024-11-19T20:35:06.251Z] =================================================================================================================== 00:45:32.456 [2024-11-19T20:35:06.251Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:45:32.456 21:35:06 keyring_file -- common/autotest_common.sh@978 -- # wait 3257282 00:45:33.390 21:35:06 keyring_file -- keyring/file.sh@21 -- # killprocess 3255536 00:45:33.390 21:35:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3255536 ']' 00:45:33.390 21:35:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3255536 00:45:33.390 21:35:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:45:33.391 21:35:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:33.391 21:35:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3255536 00:45:33.391 21:35:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:33.391 21:35:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:33.391 21:35:07 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3255536' 00:45:33.391 killing process with pid 3255536 00:45:33.391 21:35:07 keyring_file -- common/autotest_common.sh@973 -- # kill 3255536 00:45:33.391 21:35:07 keyring_file -- common/autotest_common.sh@978 -- # wait 3255536 00:45:35.921 00:45:35.921 real 0m20.112s 00:45:35.921 user 0m45.660s 00:45:35.921 sys 0m3.696s 00:45:35.921 21:35:09 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:35.921 21:35:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:35.921 ************************************ 00:45:35.921 END TEST keyring_file 00:45:35.921 ************************************ 00:45:35.921 21:35:09 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:45:35.921 21:35:09 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:35.921 21:35:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:35.921 21:35:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:35.921 21:35:09 
-- common/autotest_common.sh@10 -- # set +x 00:45:35.921 ************************************ 00:45:35.921 START TEST keyring_linux 00:45:35.921 ************************************ 00:45:35.921 21:35:09 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:35.921 Joined session keyring: 137009185 00:45:35.921 * Looking for test storage... 00:45:35.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:35.921 21:35:09 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:35.921 21:35:09 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:45:35.921 21:35:09 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:35.921 21:35:09 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@345 -- # : 1 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:35.921 21:35:09 keyring_linux -- scripts/common.sh@368 -- # return 0 00:45:35.921 21:35:09 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:35.921 21:35:09 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:35.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:35.921 --rc genhtml_branch_coverage=1 00:45:35.921 --rc genhtml_function_coverage=1 00:45:35.921 --rc genhtml_legend=1 00:45:35.921 --rc geninfo_all_blocks=1 00:45:35.921 --rc geninfo_unexecuted_blocks=1 00:45:35.921 00:45:35.921 ' 00:45:35.921 21:35:09 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:35.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:35.921 --rc genhtml_branch_coverage=1 00:45:35.921 --rc genhtml_function_coverage=1 00:45:35.921 --rc genhtml_legend=1 00:45:35.921 --rc geninfo_all_blocks=1 00:45:35.921 --rc geninfo_unexecuted_blocks=1 00:45:35.921 00:45:35.921 ' 00:45:35.921 21:35:09 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:35.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:35.921 --rc genhtml_branch_coverage=1 00:45:35.921 --rc genhtml_function_coverage=1 00:45:35.921 --rc genhtml_legend=1 00:45:35.921 --rc geninfo_all_blocks=1 00:45:35.921 --rc geninfo_unexecuted_blocks=1 00:45:35.921 00:45:35.921 ' 00:45:35.921 21:35:09 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:35.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:35.921 --rc genhtml_branch_coverage=1 00:45:35.921 --rc genhtml_function_coverage=1 00:45:35.922 --rc genhtml_legend=1 00:45:35.922 --rc geninfo_all_blocks=1 00:45:35.922 --rc geninfo_unexecuted_blocks=1 00:45:35.922 00:45:35.922 ' 00:45:35.922 21:35:09 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:35.922 21:35:09 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:35.922 21:35:09 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:45:35.922 21:35:09 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:35.922 21:35:09 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:35.922 21:35:09 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:35.922 21:35:09 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:35.922 21:35:09 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:35.922 21:35:09 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:35.922 21:35:09 keyring_linux -- paths/export.sh@5 -- # export PATH 00:45:35.922 21:35:09 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:35.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:35.922 21:35:09 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:35.922 21:35:09 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:35.922 21:35:09 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:35.922 21:35:09 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:45:35.922 21:35:09 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:45:35.922 21:35:09 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:45:35.922 21:35:09 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:45:35.922 21:35:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:35.922 21:35:09 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:45:35.922 21:35:09 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:35.922 21:35:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:35.922 21:35:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:45:35.922 21:35:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@733 -- # python - 00:45:35.922 21:35:09 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:45:35.922 21:35:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:45:35.922 /tmp/:spdk-test:key0 00:45:35.922 21:35:09 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:45:35.922 21:35:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:35.922 21:35:09 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:45:35.922 21:35:09 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:35.922 21:35:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:35.922 21:35:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:45:35.922 
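The prep_key trace above has just derived the interchange-format PSK for key0 (hex 00112233445566778899aabbccddeeff, digest 0) through format_interchange_psk, format_key, and an inline python step, and is about to repeat the same derivation for key1. A hedged, standalone re-sketch of that step follows; the assumptions (not confirmed by the trace, which does not expand the python body) are that the hex string is used verbatim as the key material and that a zlib CRC-32 of it is appended in little-endian order before base64 encoding. The real helper lives in nvmf/common.sh and may differ in detail; this sketch is not part of the captured log.

    # Re-sketch of format_interchange_psk/format_key under the assumptions above.
    format_interchange_psk() {
        local key=$1 digest=$2
        python3 - "$key" "$digest" <<'PY'
    import base64, sys, zlib
    key = sys.argv[1].encode("ascii")            # hex string used verbatim as key material (assumption)
    digest = int(sys.argv[2])
    crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte checksum, assumed little-endian
    print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
    PY
    }

    # Same inputs as the key0 step above; if the assumptions hold, the output is the
    # NVMeTLSkey-1:00:... string that the keyctl calls later load into the keyring.
    format_interchange_psk 00112233445566778899aabbccddeeff 0
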
21:35:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:45:35.922 21:35:09 keyring_linux -- nvmf/common.sh@733 -- # python - 00:45:35.922 21:35:09 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:45:35.922 21:35:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:45:35.922 /tmp/:spdk-test:key1 00:45:35.922 21:35:09 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3258039 00:45:35.922 21:35:09 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:35.922 21:35:09 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3258039 00:45:35.922 21:35:09 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3258039 ']' 00:45:35.922 21:35:09 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:35.922 21:35:09 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:35.922 21:35:09 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:35.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:35.922 21:35:09 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:35.922 21:35:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:35.922 [2024-11-19 21:35:09.700133] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
00:45:35.922 [2024-11-19 21:35:09.700292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3258039 ] 00:45:36.180 [2024-11-19 21:35:09.845330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:36.180 [2024-11-19 21:35:09.974284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:37.553 21:35:10 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:37.553 21:35:10 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:45:37.553 21:35:10 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:45:37.553 21:35:10 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:37.553 21:35:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:37.553 [2024-11-19 21:35:10.929696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:37.553 null0 00:45:37.553 [2024-11-19 21:35:10.961723] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:37.553 [2024-11-19 21:35:10.962421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:37.553 21:35:10 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.553 21:35:10 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:45:37.553 27589987 00:45:37.553 21:35:10 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:45:37.553 836126369 00:45:37.553 21:35:10 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3258306 00:45:37.553 21:35:10 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:45:37.553 21:35:10 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3258306 /var/tmp/bperf.sock 00:45:37.553 21:35:10 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3258306 ']' 00:45:37.553 21:35:10 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:37.553 21:35:10 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:37.553 21:35:10 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:37.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:37.553 21:35:10 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:37.553 21:35:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:37.553 [2024-11-19 21:35:11.067386] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.03.0 initialization... 
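At this point the spdk_tgt target is listening on 127.0.0.1 port 4420 (with the TLS-support notice logged), both interchange-format PSKs have been loaded into the kernel session keyring with keyctl (serials 27589987 and 836126369 in this run), and bdevperf is starting up. Pulled out of the trace as a standalone sketch, not part of the captured log, the provisioning pattern looks like this; the search and print lookups correspond to the linux.sh@16 and linux.sh@27 steps a little further down.

    # Store the interchange-format PSK as a "user" key in the session keyring (@s);
    # the serial printed by keyctl add is what keyring_linux resolves later.
    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    sn=$(keyctl add user :spdk-test:key0 "$psk" @s)

    keyctl search @s user :spdk-test:key0   # resolves the name back to the same serial
    keyctl print "$sn"                      # prints the stored PSK
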
00:45:37.553 [2024-11-19 21:35:11.067517] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3258306 ] 00:45:37.553 [2024-11-19 21:35:11.210118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:37.553 [2024-11-19 21:35:11.346650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:38.487 21:35:12 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:38.487 21:35:12 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:45:38.487 21:35:12 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:45:38.487 21:35:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:45:38.746 21:35:12 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:45:38.746 21:35:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:45:39.313 21:35:12 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:39.313 21:35:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:39.572 [2024-11-19 21:35:13.166194] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:39.572 nvme0n1 00:45:39.572 21:35:13 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:45:39.572 21:35:13 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:45:39.572 21:35:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:39.572 21:35:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:39.572 21:35:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:39.572 21:35:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:39.831 21:35:13 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:45:39.831 21:35:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:39.831 21:35:13 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:45:39.831 21:35:13 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:45:39.831 21:35:13 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:39.831 21:35:13 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:45:39.831 21:35:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:40.089 21:35:13 keyring_linux -- keyring/linux.sh@25 -- # sn=27589987 00:45:40.089 21:35:13 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:45:40.089 21:35:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:40.089 21:35:13 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 27589987 == \2\7\5\8\9\9\8\7 ]] 00:45:40.089 21:35:13 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 27589987 00:45:40.089 21:35:13 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:45:40.089 21:35:13 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:40.348 Running I/O for 1 seconds... 00:45:41.369 6920.00 IOPS, 27.03 MiB/s 00:45:41.369 Latency(us) 00:45:41.369 [2024-11-19T20:35:15.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:41.369 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:45:41.369 nvme0n1 : 1.02 6927.77 27.06 0.00 0.00 18299.93 6602.15 26020.22 00:45:41.369 [2024-11-19T20:35:15.164Z] =================================================================================================================== 00:45:41.369 [2024-11-19T20:35:15.164Z] Total : 6927.77 27.06 0.00 0.00 18299.93 6602.15 26020.22 00:45:41.369 { 00:45:41.369 "results": [ 00:45:41.369 { 00:45:41.369 "job": "nvme0n1", 00:45:41.369 "core_mask": "0x2", 00:45:41.369 "workload": "randread", 00:45:41.369 "status": "finished", 00:45:41.369 "queue_depth": 128, 00:45:41.369 "io_size": 4096, 00:45:41.369 "runtime": 1.017499, 00:45:41.369 "iops": 6927.770936384212, 00:45:41.369 "mibps": 27.06160522025083, 00:45:41.369 "io_failed": 0, 00:45:41.369 "io_timeout": 0, 00:45:41.369 "avg_latency_us": 18299.92674264277, 00:45:41.369 "min_latency_us": 6602.145185185185, 00:45:41.369 "max_latency_us": 26020.21925925926 00:45:41.369 } 00:45:41.369 ], 00:45:41.369 "core_count": 1 00:45:41.369 } 00:45:41.369 21:35:14 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:41.369 21:35:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:41.627 21:35:15 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:45:41.627 21:35:15 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:45:41.627 21:35:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:41.628 21:35:15 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:41.628 21:35:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:41.628 21:35:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:41.886 21:35:15 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:45:41.886 21:35:15 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:41.886 21:35:15 keyring_linux -- keyring/linux.sh@23 -- # return 00:45:41.886 21:35:15 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:41.886 21:35:15 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:45:41.886 21:35:15 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:45:41.886 21:35:15 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:41.886 21:35:15 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:41.886 21:35:15 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:41.886 21:35:15 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:41.886 21:35:15 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:41.886 21:35:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:42.145 [2024-11-19 21:35:15.777218] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:42.145 [2024-11-19 21:35:15.777223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (107): Transport endpoint is not connected 00:45:42.145 [2024-11-19 21:35:15.778203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:45:42.145 [2024-11-19 21:35:15.779198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:45:42.145 [2024-11-19 21:35:15.779226] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:42.145 [2024-11-19 21:35:15.779247] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:42.145 [2024-11-19 21:35:15.779275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
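Putting the bdevperf side together: the whole exercise is driven over the bperf RPC socket, and the attach with :spdk-test:key1 above is the deliberate negative case (the surrounding NOT wrapper expects exactly this failure). A compressed sketch of the RPC sequence follows, restricted to commands and flags that appear verbatim in this trace; socket path, key names and NQNs are the ones from the log.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# Enable the Linux keyring module so named ":spdk-test:*" PSKs can be resolved, then finish init
$rpc -s $sock keyring_linux_set_options --enable
$rpc -s $sock framework_start_init

# Positive path: attach the NVMe/TCP controller using the PSK stored under :spdk-test:key0
$rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

# ... run I/O (bdevperf.py perform_tests, as traced above), then tear the controller down again
$rpc -s $sock bdev_nvme_detach_controller nvme0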
00:45:42.145 request: 00:45:42.145 { 00:45:42.145 "name": "nvme0", 00:45:42.145 "trtype": "tcp", 00:45:42.145 "traddr": "127.0.0.1", 00:45:42.145 "adrfam": "ipv4", 00:45:42.145 "trsvcid": "4420", 00:45:42.145 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:42.145 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:42.145 "prchk_reftag": false, 00:45:42.145 "prchk_guard": false, 00:45:42.145 "hdgst": false, 00:45:42.145 "ddgst": false, 00:45:42.145 "psk": ":spdk-test:key1", 00:45:42.145 "allow_unrecognized_csi": false, 00:45:42.145 "method": "bdev_nvme_attach_controller", 00:45:42.145 "req_id": 1 00:45:42.145 } 00:45:42.145 Got JSON-RPC error response 00:45:42.145 response: 00:45:42.145 { 00:45:42.145 "code": -5, 00:45:42.145 "message": "Input/output error" 00:45:42.145 } 00:45:42.145 21:35:15 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:45:42.145 21:35:15 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:42.145 21:35:15 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:42.145 21:35:15 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:42.145 21:35:15 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:45:42.145 21:35:15 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:42.145 21:35:15 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:45:42.145 21:35:15 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:45:42.145 21:35:15 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:45:42.145 21:35:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:42.145 21:35:15 keyring_linux -- keyring/linux.sh@33 -- # sn=27589987 00:45:42.145 21:35:15 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 27589987 00:45:42.145 1 links removed 00:45:42.145 21:35:15 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:42.145 21:35:15 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:45:42.145 21:35:15 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:45:42.145 21:35:15 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:45:42.145 21:35:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:45:42.145 21:35:15 keyring_linux -- keyring/linux.sh@33 -- # sn=836126369 00:45:42.145 21:35:15 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 836126369 00:45:42.145 1 links removed 00:45:42.145 21:35:15 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3258306 00:45:42.145 21:35:15 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3258306 ']' 00:45:42.145 21:35:15 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3258306 00:45:42.145 21:35:15 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:45:42.145 21:35:15 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:42.145 21:35:15 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3258306 00:45:42.145 21:35:15 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:42.145 21:35:15 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:42.145 21:35:15 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3258306' 00:45:42.145 killing process with pid 3258306 00:45:42.145 21:35:15 keyring_linux -- common/autotest_common.sh@973 -- # kill 3258306 00:45:42.145 Received shutdown signal, test time was about 1.000000 seconds 00:45:42.145 00:45:42.145 
Latency(us) 00:45:42.145 [2024-11-19T20:35:15.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:42.145 [2024-11-19T20:35:15.940Z] =================================================================================================================== 00:45:42.145 [2024-11-19T20:35:15.940Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:42.145 21:35:15 keyring_linux -- common/autotest_common.sh@978 -- # wait 3258306 00:45:43.081 21:35:16 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3258039 00:45:43.081 21:35:16 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3258039 ']' 00:45:43.081 21:35:16 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3258039 00:45:43.081 21:35:16 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:45:43.081 21:35:16 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:43.081 21:35:16 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3258039 00:45:43.081 21:35:16 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:43.081 21:35:16 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:43.081 21:35:16 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3258039' 00:45:43.081 killing process with pid 3258039 00:45:43.081 21:35:16 keyring_linux -- common/autotest_common.sh@973 -- # kill 3258039 00:45:43.081 21:35:16 keyring_linux -- common/autotest_common.sh@978 -- # wait 3258039 00:45:45.611 00:45:45.611 real 0m9.632s 00:45:45.611 user 0m16.640s 00:45:45.611 sys 0m1.924s 00:45:45.611 21:35:18 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:45.611 21:35:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:45.611 ************************************ 00:45:45.611 END TEST keyring_linux 00:45:45.611 ************************************ 00:45:45.611 21:35:19 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:45:45.611 21:35:19 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:45:45.612 21:35:19 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:45:45.612 21:35:19 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:45:45.612 21:35:19 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:45:45.612 21:35:19 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:45:45.612 21:35:19 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:45:45.612 21:35:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:45:45.612 21:35:19 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:45:45.612 21:35:19 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:45:45.612 21:35:19 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:45:45.612 21:35:19 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:45:45.612 21:35:19 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:45:45.612 21:35:19 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:45:45.612 21:35:19 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:45:45.612 21:35:19 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:45:45.612 21:35:19 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:45:45.612 21:35:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:45.612 21:35:19 -- common/autotest_common.sh@10 -- # set +x 00:45:45.612 21:35:19 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:45:45.612 21:35:19 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:45:45.612 21:35:19 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:45:45.612 21:35:19 -- common/autotest_common.sh@10 -- # set +x 00:45:47.509 INFO: APP EXITING 
00:45:47.509 INFO: killing all VMs 00:45:47.509 INFO: killing vhost app 00:45:47.509 INFO: EXIT DONE 00:45:48.444 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:45:48.444 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:45:48.444 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:45:48.444 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:45:48.444 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:45:48.444 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:45:48.444 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:45:48.444 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:45:48.444 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:45:48.444 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:45:48.444 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:45:48.444 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:45:48.444 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:45:48.444 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:45:48.444 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:45:48.444 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:45:48.444 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:45:49.819 Cleaning 00:45:49.819 Removing: /var/run/dpdk/spdk0/config 00:45:49.819 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:49.819 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:49.819 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:49.819 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:49.819 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:45:49.819 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:45:49.819 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:45:49.819 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:45:49.820 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:49.820 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:49.820 Removing: /var/run/dpdk/spdk1/config 00:45:49.820 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:45:49.820 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:45:49.820 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:45:49.820 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:45:49.820 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:45:49.820 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:45:49.820 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:45:49.820 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:45:49.820 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:45:49.820 Removing: /var/run/dpdk/spdk1/hugepage_info 00:45:49.820 Removing: /var/run/dpdk/spdk2/config 00:45:49.820 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:45:49.820 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:45:49.820 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:45:49.820 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:45:49.820 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:45:49.820 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:45:49.820 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:45:49.820 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:45:49.820 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:45:49.820 Removing: /var/run/dpdk/spdk2/hugepage_info 00:45:49.820 Removing: /var/run/dpdk/spdk3/config 00:45:49.820 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:45:49.820 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:45:49.820 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:45:49.820 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:45:49.820 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:45:49.820 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:45:49.820 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:45:49.820 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:45:49.820 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:45:49.820 Removing: /var/run/dpdk/spdk3/hugepage_info 00:45:49.820 Removing: /var/run/dpdk/spdk4/config 00:45:49.820 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:45:49.820 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:45:49.820 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:45:49.820 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:45:49.820 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:45:49.820 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:45:49.820 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:45:49.820 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:45:49.820 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:45:49.820 Removing: /var/run/dpdk/spdk4/hugepage_info 00:45:49.820 Removing: /dev/shm/bdev_svc_trace.1 00:45:49.820 Removing: /dev/shm/nvmf_trace.0 00:45:49.820 Removing: /dev/shm/spdk_tgt_trace.pid2844386 00:45:49.820 Removing: /var/run/dpdk/spdk0 00:45:49.820 Removing: /var/run/dpdk/spdk1 00:45:49.820 Removing: /var/run/dpdk/spdk2 00:45:49.820 Removing: /var/run/dpdk/spdk3 00:45:49.820 Removing: /var/run/dpdk/spdk4 00:45:49.820 Removing: /var/run/dpdk/spdk_pid2841463 00:45:49.820 Removing: /var/run/dpdk/spdk_pid2842603 00:45:49.820 Removing: /var/run/dpdk/spdk_pid2844386 00:45:49.820 Removing: /var/run/dpdk/spdk_pid2845197 00:45:49.820 Removing: /var/run/dpdk/spdk_pid2846653 00:45:49.820 Removing: /var/run/dpdk/spdk_pid2847079 00:45:49.820 Removing: /var/run/dpdk/spdk_pid2848046 00:45:49.820 Removing: /var/run/dpdk/spdk_pid2848196 00:45:49.820 Removing: /var/run/dpdk/spdk_pid2848841 00:45:49.820 Removing: /var/run/dpdk/spdk_pid2850185 00:45:49.820 Removing: /var/run/dpdk/spdk_pid2851365 00:45:49.820 Removing: /var/run/dpdk/spdk_pid2851963 00:45:49.820 Removing: /var/run/dpdk/spdk_pid2852556 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2853167 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2853678 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2853926 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2854083 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2854400 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2854851 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2857606 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2858049 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2858606 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2858748 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2860100 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2860237 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2861482 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2861622 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2862169 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2862313 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2862741 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2862890 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2863930 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2864146 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2864416 00:45:50.078 Removing: 
/var/run/dpdk/spdk_pid2867055 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2869835 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2877701 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2878121 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2880793 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2881074 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2883988 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2887977 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2890315 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2897650 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2903280 00:45:50.078 Removing: /var/run/dpdk/spdk_pid2904608 00:45:50.079 Removing: /var/run/dpdk/spdk_pid2905524 00:45:50.079 Removing: /var/run/dpdk/spdk_pid2917330 00:45:50.079 Removing: /var/run/dpdk/spdk_pid2919902 00:45:50.079 Removing: /var/run/dpdk/spdk_pid2977985 00:45:50.079 Removing: /var/run/dpdk/spdk_pid2981421 00:45:50.079 Removing: /var/run/dpdk/spdk_pid2985660 00:45:50.079 Removing: /var/run/dpdk/spdk_pid2991890 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3022018 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3025202 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3026384 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3027851 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3028123 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3028405 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3028682 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3029635 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3031100 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3032490 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3033188 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3035073 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3035774 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3036599 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3039283 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3043056 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3043057 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3043058 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3045429 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3047785 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3052024 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3075560 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3079206 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3083243 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3084763 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3086444 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3087943 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3090984 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3094093 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3096807 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3101493 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3101509 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3104655 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3104796 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3104936 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3105322 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3105328 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3106532 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3107709 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3109005 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3110687 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3111864 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3113097 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3117106 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3117563 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3118948 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3119805 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3123797 00:45:50.079 Removing: 
/var/run/dpdk/spdk_pid3125923 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3129739 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3133327 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3140818 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3145430 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3145471 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3158574 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3159234 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3159835 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3160451 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3161551 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3162093 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3162753 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3163298 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3166196 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3166473 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3170634 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3171178 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3174832 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3177694 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3184767 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3185291 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3187926 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3188205 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3191098 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3195044 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3197336 00:45:50.079 Removing: /var/run/dpdk/spdk_pid3205024 00:45:50.337 Removing: /var/run/dpdk/spdk_pid3210580 00:45:50.337 Removing: /var/run/dpdk/spdk_pid3212014 00:45:50.337 Removing: /var/run/dpdk/spdk_pid3212801 00:45:50.337 Removing: /var/run/dpdk/spdk_pid3223637 00:45:50.337 Removing: /var/run/dpdk/spdk_pid3226166 00:45:50.337 Removing: /var/run/dpdk/spdk_pid3228304 00:45:50.337 Removing: /var/run/dpdk/spdk_pid3233753 00:45:50.338 Removing: /var/run/dpdk/spdk_pid3233874 00:45:50.338 Removing: /var/run/dpdk/spdk_pid3236911 00:45:50.338 Removing: /var/run/dpdk/spdk_pid3239041 00:45:50.338 Removing: /var/run/dpdk/spdk_pid3240556 00:45:50.338 Removing: /var/run/dpdk/spdk_pid3241459 00:45:50.338 Removing: /var/run/dpdk/spdk_pid3243074 00:45:50.338 Removing: /var/run/dpdk/spdk_pid3244069 00:45:50.338 Removing: /var/run/dpdk/spdk_pid3249744 00:45:50.338 Removing: /var/run/dpdk/spdk_pid3250133 00:45:50.338 Removing: /var/run/dpdk/spdk_pid3250531 00:45:50.338 Removing: /var/run/dpdk/spdk_pid3252409 00:45:50.338 Removing: /var/run/dpdk/spdk_pid3252689 00:45:50.338 Removing: /var/run/dpdk/spdk_pid3253090 00:45:50.338 Removing: /var/run/dpdk/spdk_pid3255536 00:45:50.338 Removing: /var/run/dpdk/spdk_pid3255680 00:45:50.338 Removing: /var/run/dpdk/spdk_pid3257282 00:45:50.338 Removing: /var/run/dpdk/spdk_pid3258039 00:45:50.338 Removing: /var/run/dpdk/spdk_pid3258306 00:45:50.338 Clean 00:45:50.338 21:35:23 -- common/autotest_common.sh@1453 -- # return 0 00:45:50.338 21:35:23 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:45:50.338 21:35:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:50.338 21:35:23 -- common/autotest_common.sh@10 -- # set +x 00:45:50.338 21:35:24 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:45:50.338 21:35:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:50.338 21:35:24 -- common/autotest_common.sh@10 -- # set +x 00:45:50.338 21:35:24 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:50.338 21:35:24 -- spdk/autotest.sh@394 -- # [[ -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:45:50.338 21:35:24 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:45:50.338 21:35:24 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:45:50.338 21:35:24 -- spdk/autotest.sh@398 -- # hostname 00:45:50.338 21:35:24 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:45:50.596 geninfo: WARNING: invalid characters removed from testname! 00:46:22.669 21:35:53 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:23.606 21:35:57 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:26.912 21:36:00 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:29.447 21:36:03 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:32.737 21:36:06 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:35.269 21:36:08 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:38.557 21:36:11 -- 
spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:46:38.557 21:36:11 -- spdk/autorun.sh@1 -- $ timing_finish 00:46:38.557 21:36:11 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:46:38.557 21:36:11 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:46:38.557 21:36:11 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:46:38.557 21:36:11 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:46:38.557 + [[ -n 2770191 ]] 00:46:38.557 + sudo kill 2770191 00:46:38.566 [Pipeline] } 00:46:38.580 [Pipeline] // stage 00:46:38.585 [Pipeline] } 00:46:38.598 [Pipeline] // timeout 00:46:38.602 [Pipeline] } 00:46:38.614 [Pipeline] // catchError 00:46:38.619 [Pipeline] } 00:46:38.632 [Pipeline] // wrap 00:46:38.638 [Pipeline] } 00:46:38.650 [Pipeline] // catchError 00:46:38.657 [Pipeline] stage 00:46:38.659 [Pipeline] { (Epilogue) 00:46:38.670 [Pipeline] catchError 00:46:38.672 [Pipeline] { 00:46:38.683 [Pipeline] echo 00:46:38.685 Cleanup processes 00:46:38.690 [Pipeline] sh 00:46:38.970 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:38.970 3272434 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:38.984 [Pipeline] sh 00:46:39.265 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:46:39.265 ++ awk '{print $1}' 00:46:39.265 ++ grep -v 'sudo pgrep' 00:46:39.265 + sudo kill -9 00:46:39.265 + true 00:46:39.276 [Pipeline] sh 00:46:39.558 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:46:51.816 [Pipeline] sh 00:46:52.104 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:46:52.104 Artifacts sizes are good 00:46:52.121 [Pipeline] archiveArtifacts 00:46:52.130 Archiving artifacts 00:46:52.271 [Pipeline] sh 00:46:52.552 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:46:52.568 [Pipeline] cleanWs 00:46:52.580 [WS-CLEANUP] Deleting project workspace... 00:46:52.580 [WS-CLEANUP] Deferred wipeout is used... 00:46:52.587 [WS-CLEANUP] done 00:46:52.589 [Pipeline] } 00:46:52.609 [Pipeline] // catchError 00:46:52.626 [Pipeline] sh 00:46:52.914 + logger -p user.info -t JENKINS-CI 00:46:52.924 [Pipeline] } 00:46:52.937 [Pipeline] // stage 00:46:52.942 [Pipeline] } 00:46:52.955 [Pipeline] // node 00:46:52.960 [Pipeline] End of Pipeline 00:46:52.996 Finished: SUCCESS